00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 623 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3288 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.122 Fetching changes from the remote Git repository 00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.175 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.189 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.189 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.652 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.663 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.672 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD) 00:00:04.672 > git config core.sparsecheckout # timeout=10 00:00:04.684 > git read-tree -mu HEAD # timeout=10 00:00:04.701 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5 00:00:04.719 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems" 00:00:04.720 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:04.829 [Pipeline] Start of Pipeline 00:00:04.840 [Pipeline] library 00:00:04.841 Loading library shm_lib@master 00:00:04.842 Library shm_lib@master is cached. Copying from home. 
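For reference, the pinned jbp checkout traced above can be replayed by hand. This is only a sketch inferred from the commands in the log; the local directory name is an assumption, and the proxy (proxy-dmz.intel.com:911) and Gerrit credential setup that Jenkins injects are deliberately omitted.

# Sketch: replay the shallow fetch and detached checkout shown above.
git init jbp && cd jbp
git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f FETCH_HEAD   # the job resolved FETCH_HEAD to 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422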
00:00:04.858 [Pipeline] node 00:00:04.866 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.867 [Pipeline] { 00:00:04.875 [Pipeline] catchError 00:00:04.876 [Pipeline] { 00:00:04.888 [Pipeline] wrap 00:00:04.897 [Pipeline] { 00:00:04.905 [Pipeline] stage 00:00:04.907 [Pipeline] { (Prologue) 00:00:05.084 [Pipeline] sh 00:00:05.368 + logger -p user.info -t JENKINS-CI 00:00:05.390 [Pipeline] echo 00:00:05.391 Node: GP11 00:00:05.399 [Pipeline] sh 00:00:05.692 [Pipeline] setCustomBuildProperty 00:00:05.704 [Pipeline] echo 00:00:05.705 Cleanup processes 00:00:05.709 [Pipeline] sh 00:00:05.989 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.989 3190469 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.001 [Pipeline] sh 00:00:06.287 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.287 ++ grep -v 'sudo pgrep' 00:00:06.287 ++ awk '{print $1}' 00:00:06.287 + sudo kill -9 00:00:06.287 + true 00:00:06.300 [Pipeline] cleanWs 00:00:06.308 [WS-CLEANUP] Deleting project workspace... 00:00:06.308 [WS-CLEANUP] Deferred wipeout is used... 00:00:06.314 [WS-CLEANUP] done 00:00:06.318 [Pipeline] setCustomBuildProperty 00:00:06.329 [Pipeline] sh 00:00:06.609 + sudo git config --global --replace-all safe.directory '*' 00:00:06.696 [Pipeline] httpRequest 00:00:06.731 [Pipeline] echo 00:00:06.733 Sorcerer 10.211.164.101 is alive 00:00:06.740 [Pipeline] httpRequest 00:00:06.744 HttpMethod: GET 00:00:06.745 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:06.746 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:06.766 Response Code: HTTP/1.1 200 OK 00:00:06.767 Success: Status code 200 is in the accepted range: 200,404 00:00:06.768 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:15.722 [Pipeline] sh 00:00:16.009 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:16.025 [Pipeline] httpRequest 00:00:16.060 [Pipeline] echo 00:00:16.061 Sorcerer 10.211.164.101 is alive 00:00:16.069 [Pipeline] httpRequest 00:00:16.074 HttpMethod: GET 00:00:16.074 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:16.075 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:16.088 Response Code: HTTP/1.1 200 OK 00:00:16.089 Success: Status code 200 is in the accepted range: 200,404 00:00:16.089 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:20.833 [Pipeline] sh 00:01:21.120 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:24.428 [Pipeline] sh 00:01:24.714 + git -C spdk log --oneline -n5 00:01:24.714 dbef7efac test: fix dpdk builds on ubuntu24 00:01:24.714 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:24.714 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:24.714 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:24.714 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:24.734 [Pipeline] withCredentials 00:01:24.746 > git --version # timeout=10 00:01:24.760 > git --version # 'git version 2.39.2' 00:01:24.778 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.781 [Pipeline] { 00:01:24.790 [Pipeline] retry 00:01:24.793 [Pipeline] { 00:01:24.811 [Pipeline] sh 
00:01:25.093 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:25.106 [Pipeline] } 00:01:25.128 [Pipeline] // retry 00:01:25.133 [Pipeline] } 00:01:25.154 [Pipeline] // withCredentials 00:01:25.164 [Pipeline] httpRequest 00:01:25.181 [Pipeline] echo 00:01:25.183 Sorcerer 10.211.164.101 is alive 00:01:25.191 [Pipeline] httpRequest 00:01:25.195 HttpMethod: GET 00:01:25.196 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.197 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.200 Response Code: HTTP/1.1 200 OK 00:01:25.200 Success: Status code 200 is in the accepted range: 200,404 00:01:25.201 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.553 [Pipeline] sh 00:01:28.839 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.233 [Pipeline] sh 00:01:30.518 + git -C dpdk log --oneline -n5 00:01:30.518 caf0f5d395 version: 22.11.4 00:01:30.518 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:30.518 dc9c799c7d vhost: fix missing spinlock unlock 00:01:30.518 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:30.518 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:30.529 [Pipeline] } 00:01:30.546 [Pipeline] // stage 00:01:30.556 [Pipeline] stage 00:01:30.558 [Pipeline] { (Prepare) 00:01:30.580 [Pipeline] writeFile 00:01:30.599 [Pipeline] sh 00:01:30.887 + logger -p user.info -t JENKINS-CI 00:01:30.901 [Pipeline] sh 00:01:31.186 + logger -p user.info -t JENKINS-CI 00:01:31.199 [Pipeline] sh 00:01:31.484 + cat autorun-spdk.conf 00:01:31.484 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.484 SPDK_TEST_NVMF=1 00:01:31.484 SPDK_TEST_NVME_CLI=1 00:01:31.484 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.484 SPDK_TEST_NVMF_NICS=e810 00:01:31.484 SPDK_TEST_VFIOUSER=1 00:01:31.484 SPDK_RUN_UBSAN=1 00:01:31.484 NET_TYPE=phy 00:01:31.484 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.484 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.492 RUN_NIGHTLY=1 00:01:31.497 [Pipeline] readFile 00:01:31.525 [Pipeline] withEnv 00:01:31.527 [Pipeline] { 00:01:31.542 [Pipeline] sh 00:01:31.826 + set -ex 00:01:31.826 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:31.826 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.826 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.826 ++ SPDK_TEST_NVMF=1 00:01:31.826 ++ SPDK_TEST_NVME_CLI=1 00:01:31.826 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.826 ++ SPDK_TEST_NVMF_NICS=e810 00:01:31.826 ++ SPDK_TEST_VFIOUSER=1 00:01:31.826 ++ SPDK_RUN_UBSAN=1 00:01:31.826 ++ NET_TYPE=phy 00:01:31.826 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.826 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.826 ++ RUN_NIGHTLY=1 00:01:31.826 + case $SPDK_TEST_NVMF_NICS in 00:01:31.826 + DRIVERS=ice 00:01:31.826 + [[ tcp == \r\d\m\a ]] 00:01:31.826 + [[ -n ice ]] 00:01:31.826 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:31.826 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:31.826 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:31.826 rmmod: ERROR: Module irdma is not currently loaded 00:01:31.826 rmmod: ERROR: Module i40iw is not currently loaded 00:01:31.826 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:31.826 + true 00:01:31.826 + for D in $DRIVERS 00:01:31.826 + sudo modprobe ice 
00:01:31.826 + exit 0 00:01:31.835 [Pipeline] } 00:01:31.854 [Pipeline] // withEnv 00:01:31.861 [Pipeline] } 00:01:31.878 [Pipeline] // stage 00:01:31.889 [Pipeline] catchError 00:01:31.891 [Pipeline] { 00:01:31.908 [Pipeline] timeout 00:01:31.908 Timeout set to expire in 50 min 00:01:31.910 [Pipeline] { 00:01:31.927 [Pipeline] stage 00:01:31.929 [Pipeline] { (Tests) 00:01:31.946 [Pipeline] sh 00:01:32.231 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.231 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.231 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.231 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:32.231 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.231 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:32.231 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:32.231 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:32.231 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:32.231 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:32.231 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:32.231 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.231 + source /etc/os-release 00:01:32.231 ++ NAME='Fedora Linux' 00:01:32.231 ++ VERSION='38 (Cloud Edition)' 00:01:32.231 ++ ID=fedora 00:01:32.231 ++ VERSION_ID=38 00:01:32.231 ++ VERSION_CODENAME= 00:01:32.231 ++ PLATFORM_ID=platform:f38 00:01:32.231 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:32.231 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:32.231 ++ LOGO=fedora-logo-icon 00:01:32.231 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:32.231 ++ HOME_URL=https://fedoraproject.org/ 00:01:32.231 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:32.231 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:32.231 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:32.231 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:32.231 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:32.231 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:32.231 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:32.231 ++ SUPPORT_END=2024-05-14 00:01:32.231 ++ VARIANT='Cloud Edition' 00:01:32.231 ++ VARIANT_ID=cloud 00:01:32.231 + uname -a 00:01:32.231 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:32.231 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:33.168 Hugepages 00:01:33.168 node hugesize free / total 00:01:33.168 node0 1048576kB 0 / 0 00:01:33.168 node0 2048kB 0 / 0 00:01:33.168 node1 1048576kB 0 / 0 00:01:33.168 node1 2048kB 0 / 0 00:01:33.168 00:01:33.168 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.168 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:33.168 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.4 
8086 0e24 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:33.168 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:33.168 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:33.168 + rm -f /tmp/spdk-ld-path 00:01:33.168 + source autorun-spdk.conf 00:01:33.168 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.168 ++ SPDK_TEST_NVMF=1 00:01:33.168 ++ SPDK_TEST_NVME_CLI=1 00:01:33.168 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.168 ++ SPDK_TEST_NVMF_NICS=e810 00:01:33.168 ++ SPDK_TEST_VFIOUSER=1 00:01:33.168 ++ SPDK_RUN_UBSAN=1 00:01:33.168 ++ NET_TYPE=phy 00:01:33.168 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:33.168 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.168 ++ RUN_NIGHTLY=1 00:01:33.168 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.168 + [[ -n '' ]] 00:01:33.168 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.168 + for M in /var/spdk/build-*-manifest.txt 00:01:33.168 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.169 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:33.169 + for M in /var/spdk/build-*-manifest.txt 00:01:33.169 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.169 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:33.169 ++ uname 00:01:33.169 + [[ Linux == \L\i\n\u\x ]] 00:01:33.169 + sudo dmesg -T 00:01:33.169 + sudo dmesg --clear 00:01:33.428 + dmesg_pid=3191183 00:01:33.428 + [[ Fedora Linux == FreeBSD ]] 00:01:33.428 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.428 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.428 + sudo dmesg -Tw 00:01:33.428 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.428 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.428 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.428 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.428 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.428 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:33.428 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.428 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.428 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.428 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.428 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.428 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.428 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:33.428 Test configuration: 00:01:33.428 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.428 SPDK_TEST_NVMF=1 00:01:33.428 SPDK_TEST_NVME_CLI=1 00:01:33.428 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.428 SPDK_TEST_NVMF_NICS=e810 00:01:33.428 SPDK_TEST_VFIOUSER=1 00:01:33.428 SPDK_RUN_UBSAN=1 00:01:33.428 NET_TYPE=phy 00:01:33.428 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:33.428 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.428 RUN_NIGHTLY=1 00:41:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:33.428 00:41:17 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.428 00:41:17 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.428 00:41:17 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.428 00:41:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.428 00:41:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.428 00:41:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.428 00:41:17 -- paths/export.sh@5 -- $ export PATH 00:01:33.428 00:41:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.428 00:41:17 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:33.428 00:41:17 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:33.428 00:41:17 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721688077.XXXXXX 00:01:33.428 00:41:17 -- common/autobuild_common.sh@438 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721688077.TF43Qs 00:01:33.428 00:41:17 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:33.428 00:41:17 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:01:33.428 00:41:17 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.428 00:41:17 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:33.428 00:41:17 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:33.428 00:41:17 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.428 00:41:17 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:33.428 00:41:17 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:33.428 00:41:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.428 00:41:17 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:33.428 00:41:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.428 00:41:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.428 00:41:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.428 00:41:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.428 Mon Jul 22 10:41:17 PM UTC 2024 00:01:33.428 00:41:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.428 LTS-60-gdbef7efac 00:01:33.428 00:41:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.428 00:41:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.428 00:41:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.429 00:41:17 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:33.429 00:41:17 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:33.429 00:41:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.429 ************************************ 00:01:33.429 START TEST ubsan 00:01:33.429 ************************************ 00:01:33.429 00:41:17 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:33.429 using ubsan 00:01:33.429 00:01:33.429 real 0m0.000s 00:01:33.429 user 0m0.000s 00:01:33.429 sys 0m0.000s 00:01:33.429 00:41:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:33.429 00:41:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.429 ************************************ 00:01:33.429 END TEST ubsan 00:01:33.429 ************************************ 00:01:33.429 00:41:17 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:33.429 00:41:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:33.429 00:41:17 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:33.429 00:41:17 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:33.429 00:41:17 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:33.429 00:41:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.429 ************************************ 00:01:33.429 START TEST build_native_dpdk 00:01:33.429 
************************************ 00:01:33.429 00:41:17 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:33.429 00:41:17 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:33.429 00:41:17 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:33.429 00:41:17 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:33.429 00:41:17 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:33.429 00:41:17 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:33.429 00:41:17 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:33.429 00:41:17 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:33.429 00:41:17 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:33.429 00:41:17 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:33.429 00:41:17 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:33.429 00:41:17 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:33.429 00:41:17 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:33.429 00:41:17 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.429 00:41:17 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.429 00:41:17 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:33.429 00:41:17 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.429 00:41:17 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:33.429 caf0f5d395 version: 22.11.4 00:01:33.429 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:33.429 dc9c799c7d vhost: fix missing spinlock unlock 00:01:33.429 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:33.429 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:33.429 00:41:17 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:33.429 00:41:17 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:33.429 00:41:17 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:33.429 00:41:17 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:33.429 00:41:17 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:33.429 00:41:17 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:33.429 00:41:17 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:33.429 00:41:17 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:33.429 00:41:17 -- common/autobuild_common.sh@167 -- $ cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:33.429 00:41:17 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:33.429 00:41:17 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:33.429 00:41:17 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:33.429 00:41:17 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:33.429 00:41:17 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:33.429 00:41:17 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:33.429 00:41:17 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:33.429 00:41:17 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:33.429 00:41:17 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:33.429 00:41:17 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:33.429 00:41:17 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:33.429 00:41:17 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:33.429 00:41:17 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:33.429 00:41:17 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:33.429 00:41:17 -- scripts/common.sh@343 -- $ case "$op" in 00:01:33.429 00:41:17 -- scripts/common.sh@344 -- $ : 1 00:01:33.429 00:41:17 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:33.429 00:41:17 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:33.429 00:41:17 -- scripts/common.sh@364 -- $ decimal 22 00:01:33.429 00:41:17 -- scripts/common.sh@352 -- $ local d=22 00:01:33.429 00:41:17 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:33.429 00:41:17 -- scripts/common.sh@354 -- $ echo 22 00:01:33.429 00:41:17 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:33.429 00:41:17 -- scripts/common.sh@365 -- $ decimal 21 00:01:33.429 00:41:17 -- scripts/common.sh@352 -- $ local d=21 00:01:33.429 00:41:17 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:33.429 00:41:17 -- scripts/common.sh@354 -- $ echo 21 00:01:33.429 00:41:17 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:33.429 00:41:17 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:33.429 00:41:17 -- scripts/common.sh@366 -- $ return 1 00:01:33.429 00:41:17 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:33.429 patching file config/rte_config.h 00:01:33.429 Hunk #1 succeeded at 60 (offset 1 line). 00:01:33.429 00:41:17 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:33.429 00:41:17 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:33.429 00:41:17 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:33.429 00:41:17 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:33.429 00:41:17 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:33.429 00:41:17 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:33.429 00:41:17 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:33.429 00:41:17 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:33.429 00:41:17 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:33.429 00:41:17 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:33.429 00:41:17 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:33.429 00:41:17 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:33.429 00:41:17 -- scripts/common.sh@343 -- $ case "$op" in 00:01:33.429 00:41:17 -- scripts/common.sh@344 -- $ : 1 00:01:33.429 00:41:17 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:33.429 00:41:17 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:33.429 00:41:17 -- scripts/common.sh@364 -- $ decimal 22 00:01:33.429 00:41:17 -- scripts/common.sh@352 -- $ local d=22 00:01:33.429 00:41:17 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:33.429 00:41:17 -- scripts/common.sh@354 -- $ echo 22 00:01:33.429 00:41:17 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:33.429 00:41:17 -- scripts/common.sh@365 -- $ decimal 24 00:01:33.429 00:41:17 -- scripts/common.sh@352 -- $ local d=24 00:01:33.429 00:41:17 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:33.429 00:41:17 -- scripts/common.sh@354 -- $ echo 24 00:01:33.429 00:41:17 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:33.429 00:41:17 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:33.429 00:41:17 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:33.429 00:41:17 -- scripts/common.sh@367 -- $ return 0 00:01:33.429 00:41:17 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:33.429 patching file lib/pcapng/rte_pcapng.c 00:01:33.429 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:33.429 00:41:17 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:33.429 00:41:17 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:33.429 00:41:17 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:33.429 00:41:17 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:33.429 00:41:17 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.622 The Meson build system 00:01:37.622 Version: 1.3.1 00:01:37.622 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:37.622 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:37.622 Build type: native build 00:01:37.622 Program cat found: YES (/usr/bin/cat) 00:01:37.622 Project name: DPDK 00:01:37.622 Project version: 22.11.4 00:01:37.622 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:37.622 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:37.622 Host machine cpu family: x86_64 00:01:37.622 Host machine cpu: x86_64 00:01:37.622 Message: ## Building in Developer Mode ## 00:01:37.622 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:37.622 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:37.622 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:37.622 Program objdump found: YES (/usr/bin/objdump) 00:01:37.622 Program python3 found: YES (/usr/bin/python3) 00:01:37.622 Program cat found: YES (/usr/bin/cat) 00:01:37.622 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
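The `lt 22.11.4 21.11.0` and `lt 22.11.4 24.07.0` traces above boil down to a field-by-field numeric comparison of dotted version strings, split on ".", "-" and ":" (the `IFS=.-:` lines). The following stand-alone sketch is simplified from that trace, not the verbatim scripts/common.sh implementation, and uses a hypothetical name to keep the two apart.

# Sketch (simplified): split two dotted versions and compare numerically
# field by field, as the cmp_versions trace above does.
version_cmp() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; return; }
    done
    echo eq
}
# version_cmp 22.11.4 21.11.0  -> gt  (matches the first "lt" check above returning 1)
# version_cmp 22.11.4 24.07.0  -> lt  (matches the second check returning 0, after which rte_pcapng.c is patched)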
00:01:37.622 Checking for size of "void *" : 8 00:01:37.622 Checking for size of "void *" : 8 (cached) 00:01:37.622 Library m found: YES 00:01:37.622 Library numa found: YES 00:01:37.622 Has header "numaif.h" : YES 00:01:37.622 Library fdt found: NO 00:01:37.622 Library execinfo found: NO 00:01:37.622 Has header "execinfo.h" : YES 00:01:37.622 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.622 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:37.622 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:37.622 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:37.622 Run-time dependency openssl found: YES 3.0.9 00:01:37.622 Run-time dependency libpcap found: YES 1.10.4 00:01:37.622 Has header "pcap.h" with dependency libpcap: YES 00:01:37.623 Compiler for C supports arguments -Wcast-qual: YES 00:01:37.623 Compiler for C supports arguments -Wdeprecated: YES 00:01:37.623 Compiler for C supports arguments -Wformat: YES 00:01:37.623 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:37.623 Compiler for C supports arguments -Wformat-security: NO 00:01:37.623 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.623 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:37.623 Compiler for C supports arguments -Wnested-externs: YES 00:01:37.623 Compiler for C supports arguments -Wold-style-definition: YES 00:01:37.623 Compiler for C supports arguments -Wpointer-arith: YES 00:01:37.623 Compiler for C supports arguments -Wsign-compare: YES 00:01:37.623 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:37.623 Compiler for C supports arguments -Wundef: YES 00:01:37.623 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.623 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:37.623 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:37.623 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.623 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:37.623 Compiler for C supports arguments -mavx512f: YES 00:01:37.623 Checking if "AVX512 checking" compiles: YES 00:01:37.623 Fetching value of define "__SSE4_2__" : 1 00:01:37.623 Fetching value of define "__AES__" : 1 00:01:37.623 Fetching value of define "__AVX__" : 1 00:01:37.623 Fetching value of define "__AVX2__" : (undefined) 00:01:37.623 Fetching value of define "__AVX512BW__" : (undefined) 00:01:37.623 Fetching value of define "__AVX512CD__" : (undefined) 00:01:37.623 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:37.623 Fetching value of define "__AVX512F__" : (undefined) 00:01:37.623 Fetching value of define "__AVX512VL__" : (undefined) 00:01:37.623 Fetching value of define "__PCLMUL__" : 1 00:01:37.623 Fetching value of define "__RDRND__" : 1 00:01:37.623 Fetching value of define "__RDSEED__" : (undefined) 00:01:37.623 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:37.623 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:37.623 Message: lib/kvargs: Defining dependency "kvargs" 00:01:37.623 Message: lib/telemetry: Defining dependency "telemetry" 00:01:37.623 Checking for function "getentropy" : YES 00:01:37.623 Message: lib/eal: Defining dependency "eal" 00:01:37.623 Message: lib/ring: Defining dependency "ring" 00:01:37.623 Message: lib/rcu: Defining dependency "rcu" 00:01:37.623 Message: lib/mempool: Defining dependency "mempool" 00:01:37.623 Message: lib/mbuf: Defining dependency "mbuf" 00:01:37.623 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:37.623 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.623 Compiler for C supports arguments -mpclmul: YES 00:01:37.623 Compiler for C supports arguments -maes: YES 00:01:37.623 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.623 Compiler for C supports arguments -mavx512bw: YES 00:01:37.623 Compiler for C supports arguments -mavx512dq: YES 00:01:37.623 Compiler for C supports arguments -mavx512vl: YES 00:01:37.623 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:37.623 Compiler for C supports arguments -mavx2: YES 00:01:37.623 Compiler for C supports arguments -mavx: YES 00:01:37.623 Message: lib/net: Defining dependency "net" 00:01:37.623 Message: lib/meter: Defining dependency "meter" 00:01:37.623 Message: lib/ethdev: Defining dependency "ethdev" 00:01:37.623 Message: lib/pci: Defining dependency "pci" 00:01:37.623 Message: lib/cmdline: Defining dependency "cmdline" 00:01:37.623 Message: lib/metrics: Defining dependency "metrics" 00:01:37.623 Message: lib/hash: Defining dependency "hash" 00:01:37.623 Message: lib/timer: Defining dependency "timer" 00:01:37.623 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:37.623 Compiler for C supports arguments -mavx2: YES (cached) 00:01:37.623 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.623 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:37.623 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:37.623 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:37.623 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:37.623 Message: lib/acl: Defining dependency "acl" 00:01:37.623 Message: lib/bbdev: Defining dependency "bbdev" 00:01:37.623 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:37.623 Run-time dependency libelf found: YES 0.190 00:01:37.623 Message: lib/bpf: Defining dependency "bpf" 00:01:37.623 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:37.623 Message: lib/compressdev: Defining dependency "compressdev" 00:01:37.623 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:37.623 Message: lib/distributor: Defining dependency "distributor" 00:01:37.623 Message: lib/efd: Defining dependency "efd" 00:01:37.623 Message: lib/eventdev: Defining dependency "eventdev" 00:01:37.623 Message: lib/gpudev: Defining dependency "gpudev" 00:01:37.623 Message: lib/gro: Defining dependency "gro" 00:01:37.623 Message: lib/gso: Defining dependency "gso" 00:01:37.623 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:37.623 Message: lib/jobstats: Defining dependency "jobstats" 00:01:37.623 Message: lib/latencystats: Defining dependency "latencystats" 00:01:37.623 Message: lib/lpm: Defining dependency "lpm" 00:01:37.623 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.623 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:37.623 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:37.623 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:37.623 Message: lib/member: Defining dependency "member" 00:01:37.623 Message: lib/pcapng: Defining dependency "pcapng" 00:01:37.623 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:37.623 Message: lib/power: Defining dependency "power" 00:01:37.623 Message: lib/rawdev: Defining dependency "rawdev" 00:01:37.623 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:37.623 Message: lib/dmadev: Defining dependency "dmadev" 00:01:37.623 Message: lib/rib: Defining dependency "rib" 00:01:37.623 Message: lib/reorder: Defining dependency "reorder" 00:01:37.623 Message: lib/sched: Defining dependency "sched" 00:01:37.623 Message: lib/security: Defining dependency "security" 00:01:37.623 Message: lib/stack: Defining dependency "stack" 00:01:37.623 Has header "linux/userfaultfd.h" : YES 00:01:37.623 Message: lib/vhost: Defining dependency "vhost" 00:01:37.623 Message: lib/ipsec: Defining dependency "ipsec" 00:01:37.623 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.623 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:37.623 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:37.623 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.623 Message: lib/fib: Defining dependency "fib" 00:01:37.623 Message: lib/port: Defining dependency "port" 00:01:37.623 Message: lib/pdump: Defining dependency "pdump" 00:01:37.623 Message: lib/table: Defining dependency "table" 00:01:37.623 Message: lib/pipeline: Defining dependency "pipeline" 00:01:37.623 Message: lib/graph: Defining dependency "graph" 00:01:37.623 Message: lib/node: Defining dependency "node" 00:01:37.623 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:37.623 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.623 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.623 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.623 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:37.623 Compiler for C supports arguments -Wno-unused-value: YES 00:01:38.561 Compiler for C supports arguments -Wno-format: YES 00:01:38.561 Compiler for C supports arguments -Wno-format-security: YES 00:01:38.561 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:38.561 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:38.561 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:38.561 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:38.561 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:38.561 Compiler for C supports arguments -mavx2: YES (cached) 00:01:38.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:38.561 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.561 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:38.561 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:38.561 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:38.561 Program doxygen found: YES (/usr/bin/doxygen) 00:01:38.561 Configuring doxy-api.conf using configuration 00:01:38.561 Program sphinx-build found: NO 00:01:38.561 Configuring rte_build_config.h using configuration 00:01:38.561 Message: 00:01:38.561 ================= 00:01:38.561 Applications Enabled 00:01:38.561 ================= 00:01:38.561 00:01:38.561 apps: 00:01:38.561 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:38.561 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:38.561 test-security-perf, 00:01:38.561 00:01:38.561 Message: 00:01:38.561 ================= 00:01:38.561 Libraries Enabled 00:01:38.561 ================= 00:01:38.561 00:01:38.561 libs: 00:01:38.561 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:38.561 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:38.561 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:38.561 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:38.561 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:38.561 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:38.561 table, pipeline, graph, node, 00:01:38.561 00:01:38.561 Message: 00:01:38.561 =============== 00:01:38.561 Drivers Enabled 00:01:38.561 =============== 00:01:38.561 00:01:38.561 common: 00:01:38.561 00:01:38.561 bus: 00:01:38.561 pci, vdev, 00:01:38.561 mempool: 00:01:38.561 ring, 00:01:38.561 dma: 00:01:38.561 00:01:38.561 net: 00:01:38.561 i40e, 00:01:38.561 raw: 00:01:38.561 00:01:38.561 crypto: 00:01:38.561 00:01:38.561 compress: 00:01:38.561 00:01:38.561 regex: 00:01:38.561 00:01:38.561 vdpa: 00:01:38.561 00:01:38.561 event: 00:01:38.561 00:01:38.561 baseband: 00:01:38.561 00:01:38.561 gpu: 00:01:38.561 00:01:38.561 00:01:38.561 Message: 00:01:38.561 ================= 00:01:38.561 Content Skipped 00:01:38.561 ================= 00:01:38.561 00:01:38.561 apps: 00:01:38.561 00:01:38.561 libs: 00:01:38.561 kni: explicitly disabled via build config (deprecated lib) 00:01:38.561 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:38.561 00:01:38.561 drivers: 00:01:38.561 common/cpt: not in enabled drivers build config 00:01:38.561 common/dpaax: not in enabled drivers build config 00:01:38.561 common/iavf: not in enabled drivers build config 00:01:38.561 common/idpf: not in enabled drivers build config 00:01:38.561 common/mvep: not in enabled drivers build config 00:01:38.561 common/octeontx: not in enabled drivers build config 00:01:38.561 bus/auxiliary: not in enabled drivers build config 00:01:38.561 bus/dpaa: not in enabled drivers build config 00:01:38.561 bus/fslmc: not in enabled drivers build config 00:01:38.561 bus/ifpga: not in enabled drivers build config 00:01:38.561 bus/vmbus: not in enabled drivers build config 00:01:38.561 common/cnxk: not in enabled drivers build config 00:01:38.561 common/mlx5: not in enabled drivers build config 00:01:38.561 common/qat: not in enabled drivers build config 00:01:38.561 common/sfc_efx: not in enabled drivers build config 00:01:38.561 mempool/bucket: not in enabled drivers build config 00:01:38.561 mempool/cnxk: not in enabled drivers build config 00:01:38.561 mempool/dpaa: not in enabled drivers build config 00:01:38.561 mempool/dpaa2: not in enabled drivers build config 00:01:38.561 mempool/octeontx: not in enabled drivers build config 00:01:38.561 mempool/stack: not in enabled drivers build config 00:01:38.561 dma/cnxk: not in enabled drivers build config 00:01:38.561 dma/dpaa: not in enabled drivers build config 00:01:38.561 dma/dpaa2: not in enabled drivers build config 00:01:38.561 dma/hisilicon: not in enabled drivers build config 00:01:38.561 dma/idxd: not in enabled drivers build config 00:01:38.561 dma/ioat: not in enabled drivers build config 00:01:38.561 dma/skeleton: not in enabled drivers build config 00:01:38.561 net/af_packet: not in enabled drivers build config 00:01:38.561 net/af_xdp: not in enabled drivers build config 00:01:38.561 net/ark: not in enabled drivers build config 00:01:38.561 net/atlantic: not in enabled drivers build config 00:01:38.561 net/avp: not in enabled drivers build config 00:01:38.561 net/axgbe: not in enabled drivers build config 00:01:38.561 net/bnx2x: not in enabled drivers build config 00:01:38.561 net/bnxt: not in 
enabled drivers build config 00:01:38.561 net/bonding: not in enabled drivers build config 00:01:38.561 net/cnxk: not in enabled drivers build config 00:01:38.561 net/cxgbe: not in enabled drivers build config 00:01:38.561 net/dpaa: not in enabled drivers build config 00:01:38.561 net/dpaa2: not in enabled drivers build config 00:01:38.561 net/e1000: not in enabled drivers build config 00:01:38.561 net/ena: not in enabled drivers build config 00:01:38.561 net/enetc: not in enabled drivers build config 00:01:38.561 net/enetfec: not in enabled drivers build config 00:01:38.561 net/enic: not in enabled drivers build config 00:01:38.561 net/failsafe: not in enabled drivers build config 00:01:38.561 net/fm10k: not in enabled drivers build config 00:01:38.561 net/gve: not in enabled drivers build config 00:01:38.561 net/hinic: not in enabled drivers build config 00:01:38.561 net/hns3: not in enabled drivers build config 00:01:38.561 net/iavf: not in enabled drivers build config 00:01:38.561 net/ice: not in enabled drivers build config 00:01:38.561 net/idpf: not in enabled drivers build config 00:01:38.561 net/igc: not in enabled drivers build config 00:01:38.561 net/ionic: not in enabled drivers build config 00:01:38.561 net/ipn3ke: not in enabled drivers build config 00:01:38.561 net/ixgbe: not in enabled drivers build config 00:01:38.561 net/kni: not in enabled drivers build config 00:01:38.561 net/liquidio: not in enabled drivers build config 00:01:38.561 net/mana: not in enabled drivers build config 00:01:38.561 net/memif: not in enabled drivers build config 00:01:38.561 net/mlx4: not in enabled drivers build config 00:01:38.561 net/mlx5: not in enabled drivers build config 00:01:38.561 net/mvneta: not in enabled drivers build config 00:01:38.561 net/mvpp2: not in enabled drivers build config 00:01:38.561 net/netvsc: not in enabled drivers build config 00:01:38.561 net/nfb: not in enabled drivers build config 00:01:38.561 net/nfp: not in enabled drivers build config 00:01:38.561 net/ngbe: not in enabled drivers build config 00:01:38.561 net/null: not in enabled drivers build config 00:01:38.561 net/octeontx: not in enabled drivers build config 00:01:38.561 net/octeon_ep: not in enabled drivers build config 00:01:38.561 net/pcap: not in enabled drivers build config 00:01:38.561 net/pfe: not in enabled drivers build config 00:01:38.561 net/qede: not in enabled drivers build config 00:01:38.561 net/ring: not in enabled drivers build config 00:01:38.561 net/sfc: not in enabled drivers build config 00:01:38.561 net/softnic: not in enabled drivers build config 00:01:38.561 net/tap: not in enabled drivers build config 00:01:38.561 net/thunderx: not in enabled drivers build config 00:01:38.561 net/txgbe: not in enabled drivers build config 00:01:38.561 net/vdev_netvsc: not in enabled drivers build config 00:01:38.561 net/vhost: not in enabled drivers build config 00:01:38.561 net/virtio: not in enabled drivers build config 00:01:38.561 net/vmxnet3: not in enabled drivers build config 00:01:38.561 raw/cnxk_bphy: not in enabled drivers build config 00:01:38.561 raw/cnxk_gpio: not in enabled drivers build config 00:01:38.561 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:38.561 raw/ifpga: not in enabled drivers build config 00:01:38.561 raw/ntb: not in enabled drivers build config 00:01:38.561 raw/skeleton: not in enabled drivers build config 00:01:38.561 crypto/armv8: not in enabled drivers build config 00:01:38.561 crypto/bcmfs: not in enabled drivers build config 00:01:38.561 
crypto/caam_jr: not in enabled drivers build config 00:01:38.561 crypto/ccp: not in enabled drivers build config 00:01:38.561 crypto/cnxk: not in enabled drivers build config 00:01:38.561 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.561 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.561 crypto/ipsec_mb: not in enabled drivers build config 00:01:38.561 crypto/mlx5: not in enabled drivers build config 00:01:38.561 crypto/mvsam: not in enabled drivers build config 00:01:38.561 crypto/nitrox: not in enabled drivers build config 00:01:38.561 crypto/null: not in enabled drivers build config 00:01:38.561 crypto/octeontx: not in enabled drivers build config 00:01:38.561 crypto/openssl: not in enabled drivers build config 00:01:38.561 crypto/scheduler: not in enabled drivers build config 00:01:38.561 crypto/uadk: not in enabled drivers build config 00:01:38.561 crypto/virtio: not in enabled drivers build config 00:01:38.561 compress/isal: not in enabled drivers build config 00:01:38.561 compress/mlx5: not in enabled drivers build config 00:01:38.561 compress/octeontx: not in enabled drivers build config 00:01:38.562 compress/zlib: not in enabled drivers build config 00:01:38.562 regex/mlx5: not in enabled drivers build config 00:01:38.562 regex/cn9k: not in enabled drivers build config 00:01:38.562 vdpa/ifc: not in enabled drivers build config 00:01:38.562 vdpa/mlx5: not in enabled drivers build config 00:01:38.562 vdpa/sfc: not in enabled drivers build config 00:01:38.562 event/cnxk: not in enabled drivers build config 00:01:38.562 event/dlb2: not in enabled drivers build config 00:01:38.562 event/dpaa: not in enabled drivers build config 00:01:38.562 event/dpaa2: not in enabled drivers build config 00:01:38.562 event/dsw: not in enabled drivers build config 00:01:38.562 event/opdl: not in enabled drivers build config 00:01:38.562 event/skeleton: not in enabled drivers build config 00:01:38.562 event/sw: not in enabled drivers build config 00:01:38.562 event/octeontx: not in enabled drivers build config 00:01:38.562 baseband/acc: not in enabled drivers build config 00:01:38.562 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:38.562 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:38.562 baseband/la12xx: not in enabled drivers build config 00:01:38.562 baseband/null: not in enabled drivers build config 00:01:38.562 baseband/turbo_sw: not in enabled drivers build config 00:01:38.562 gpu/cuda: not in enabled drivers build config 00:01:38.562 00:01:38.562 00:01:38.562 Build targets in project: 316 00:01:38.562 00:01:38.562 DPDK 22.11.4 00:01:38.562 00:01:38.562 User defined options 00:01:38.562 libdir : lib 00:01:38.562 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.562 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:38.562 c_link_args : 00:01:38.562 enable_docs : false 00:01:38.562 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:38.562 enable_kmods : false 00:01:38.562 machine : native 00:01:38.562 tests : false 00:01:38.562 00:01:38.562 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.562 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
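The meson configuration above (and the ninja build that follows) can be reproduced outside Jenkins roughly as sketched below. The flags and layout are taken from this log; the use of `meson setup` (the form the deprecation warning above recommends), the explicit install step, and the trimmed-down SPDK configure flags are assumptions for a local reproduction, not the verbatim CI commands.

# Sketch: rebuild DPDK v22.11.4 with the flags shown above, install it into
# dpdk/build, then point SPDK's configure at that prefix (mirroring the
# --with-dpdk entry in the config_params line earlier in this log).
root="$PWD"   # assumed: a directory containing ./dpdk (v22.11.4) and ./spdk checkouts
cd "$root/dpdk"
meson setup build-tmp --prefix="$root/dpdk/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
ninja -C build-tmp -j"$(nproc)" install
cd "$root/spdk"
./configure --with-dpdk="$root/dpdk/build" --enable-debug --enable-werror --enable-ubsan
make -j"$(nproc)"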
00:01:38.562 00:41:22 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:38.562 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:38.827 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:38.827 [2/745] Generating lib/rte_kvargs_def with a custom command 00:01:38.827 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:38.827 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:38.827 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:38.827 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:38.827 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.827 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.827 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.827 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.827 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.827 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.827 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.827 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.827 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.827 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.827 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.827 [18/745] Linking static target lib/librte_kvargs.a 00:01:38.827 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.827 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.827 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.827 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.827 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.827 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.095 [25/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.095 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.095 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.095 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.095 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.095 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:39.095 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.095 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:39.095 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.095 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.095 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.095 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.095 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.095 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.095 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.095 [40/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.095 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.095 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:39.095 [43/745] Generating lib/rte_eal_def with a custom command 00:01:39.095 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:39.095 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.095 [46/745] Generating lib/rte_ring_def with a custom command 00:01:39.096 [47/745] Generating lib/rte_ring_mingw with a custom command 00:01:39.096 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.096 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.096 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:39.096 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.096 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.096 [53/745] Generating lib/rte_rcu_mingw with a custom command 00:01:39.096 [54/745] Generating lib/rte_rcu_def with a custom command 00:01:39.096 [55/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.096 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:39.096 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.096 [58/745] Generating lib/rte_mempool_def with a custom command 00:01:39.096 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:39.096 [60/745] Generating lib/rte_mempool_mingw with a custom command 00:01:39.096 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.096 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.096 [63/745] Generating lib/rte_mbuf_def with a custom command 00:01:39.096 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:39.096 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:39.096 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.096 [67/745] Generating lib/rte_net_def with a custom command 00:01:39.096 [68/745] Generating lib/rte_net_mingw with a custom command 00:01:39.096 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.096 [70/745] Generating lib/rte_meter_def with a custom command 00:01:39.096 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.096 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:39.096 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:39.096 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.096 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:39.096 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.354 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.354 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.354 [79/745] Generating lib/rte_ethdev_def with a custom command 00:01:39.354 [80/745] Linking target 
lib/librte_kvargs.so.23.0 00:01:39.354 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.354 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:39.354 [83/745] Linking static target lib/librte_ring.a 00:01:39.354 [84/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.354 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.354 [86/745] Generating lib/rte_pci_def with a custom command 00:01:39.354 [87/745] Linking static target lib/librte_meter.a 00:01:39.354 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.354 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.354 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:39.614 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.614 [92/745] Linking static target lib/librte_pci.a 00:01:39.614 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.614 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:39.614 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.614 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:39.614 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.614 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.614 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.878 [100/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.878 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.878 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.878 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.878 [104/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.878 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.878 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.878 [107/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:39.878 [108/745] Generating lib/rte_cmdline_def with a custom command 00:01:39.878 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.878 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.878 [111/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.878 [112/745] Generating lib/rte_metrics_def with a custom command 00:01:39.878 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.878 [114/745] Generating lib/rte_metrics_mingw with a custom command 00:01:39.878 [115/745] Linking static target lib/librte_telemetry.a 00:01:39.878 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.878 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.878 [118/745] Generating lib/rte_hash_def with a custom command 00:01:39.878 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:39.878 [120/745] Generating lib/rte_timer_def with a custom command 00:01:39.878 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.878 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:40.141 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:40.141 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.141 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.141 [126/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.141 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.406 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.406 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.406 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.406 [131/745] Generating lib/rte_acl_def with a custom command 00:01:40.406 [132/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.406 [133/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.406 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:40.406 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:40.406 [136/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.406 [137/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.406 [138/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.407 [139/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:40.407 [140/745] Generating lib/rte_bitratestats_def with a custom command 00:01:40.407 [141/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.407 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:40.407 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.407 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:40.407 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.407 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.407 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.407 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.665 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.665 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:40.665 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:40.665 [152/745] Generating lib/rte_cfgfile_def with a custom command 00:01:40.665 [153/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:40.665 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.666 [155/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.666 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.666 [157/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.666 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:40.666 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:40.666 [160/745] Generating lib/rte_compressdev_def with a custom command 00:01:40.666 [161/745] Generating lib/rte_cryptodev_def with a custom command 00:01:40.666 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.666 [163/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.666 
[164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.666 [165/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.666 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:40.927 [167/745] Linking static target lib/librte_cmdline.a 00:01:40.927 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.927 [169/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.927 [170/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.927 [171/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.927 [172/745] Linking static target lib/librte_timer.a 00:01:40.927 [173/745] Linking static target lib/librte_rcu.a 00:01:40.927 [174/745] Generating lib/rte_distributor_def with a custom command 00:01:40.927 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:40.927 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.927 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.927 [178/745] Linking static target lib/librte_net.a 00:01:40.927 [179/745] Generating lib/rte_efd_def with a custom command 00:01:40.927 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:40.927 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:41.193 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:41.193 [183/745] Linking static target lib/librte_metrics.a 00:01:41.193 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:41.193 [185/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:41.193 [186/745] Linking static target lib/librte_cfgfile.a 00:01:41.193 [187/745] Linking static target lib/librte_mempool.a 00:01:41.193 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.193 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:41.193 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.453 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.453 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:41.453 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:41.453 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:41.453 [195/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:41.453 [196/745] Generating lib/rte_eventdev_def with a custom command 00:01:41.453 [197/745] Linking static target lib/librte_eal.a 00:01:41.453 [198/745] Generating lib/rte_gpudev_def with a custom command 00:01:41.453 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:41.719 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:41.719 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:41.719 [202/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.719 [203/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:41.719 [204/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:41.719 [205/745] Linking static target lib/librte_bitratestats.a 00:01:41.719 [206/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:41.719 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:41.719 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.719 [209/745] Generating lib/rte_gro_def with a custom command 00:01:41.719 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:41.719 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:41.719 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:41.984 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:41.984 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.984 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.984 [216/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:41.984 [217/745] Generating lib/rte_gso_def with a custom command 00:01:41.984 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:41.984 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:41.984 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:42.250 [221/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.250 [222/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:42.250 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.250 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:42.250 [225/745] Linking static target lib/librte_bbdev.a 00:01:42.250 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:01:42.250 [227/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:42.250 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:42.250 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.250 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.250 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:42.250 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:42.250 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:42.250 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:42.250 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:42.513 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:42.513 [237/745] Generating lib/rte_lpm_mingw with a custom command 00:01:42.513 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.513 [239/745] Linking static target lib/librte_compressdev.a 00:01:42.513 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:42.513 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:42.513 [242/745] Linking static target lib/librte_jobstats.a 00:01:42.513 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:42.781 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.781 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:42.781 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:42.781 [247/745] Linking static target lib/librte_distributor.a 00:01:42.781 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:42.781 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:43.042 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.042 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:43.042 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:43.042 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:43.042 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:43.042 [255/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.042 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:43.042 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:43.042 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:43.042 [259/745] Linking static target lib/librte_bpf.a 00:01:43.042 [260/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:43.042 [261/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:43.042 [262/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:43.304 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:43.304 [264/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.305 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:43.305 [266/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:43.305 [267/745] Generating lib/rte_power_def with a custom command 00:01:43.305 [268/745] Generating lib/rte_power_mingw with a custom command 00:01:43.305 [269/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:43.305 [270/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:43.305 [271/745] Linking static target lib/librte_gpudev.a 00:01:43.305 [272/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:43.305 [273/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:43.305 [274/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:43.305 [275/745] Generating lib/rte_rawdev_def with a custom command 00:01:43.305 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:43.305 [277/745] Linking static target lib/librte_gro.a 00:01:43.305 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:43.305 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:43.305 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:43.305 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:43.305 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:43.571 [283/745] Generating lib/rte_rib_def with a custom command 00:01:43.571 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:43.571 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:43.571 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:43.571 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:43.571 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:43.571 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.571 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.848 
[291/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:43.848 [292/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.848 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:43.848 [294/745] Generating lib/rte_sched_def with a custom command 00:01:43.848 [295/745] Generating lib/rte_sched_mingw with a custom command 00:01:43.849 [296/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:43.849 [297/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:43.849 [298/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:43.849 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:43.849 [300/745] Linking static target lib/librte_latencystats.a 00:01:43.849 [301/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:43.849 [302/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:43.849 [303/745] Generating lib/rte_security_def with a custom command 00:01:43.849 [304/745] Generating lib/rte_security_mingw with a custom command 00:01:43.849 [305/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:43.849 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:43.849 [307/745] Generating lib/rte_stack_def with a custom command 00:01:43.849 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:43.849 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:43.849 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:43.849 [311/745] Linking static target lib/librte_rawdev.a 00:01:43.849 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:43.849 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:44.133 [314/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:44.133 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:44.133 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:44.133 [317/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:44.133 [318/745] Linking static target lib/librte_stack.a 00:01:44.133 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:44.133 [320/745] Generating lib/rte_vhost_def with a custom command 00:01:44.133 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.133 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:44.133 [323/745] Linking static target lib/librte_dmadev.a 00:01:44.133 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.133 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:44.133 [326/745] Linking static target lib/librte_ip_frag.a 00:01:44.407 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:44.407 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:44.407 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.407 [330/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:44.407 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:44.407 [332/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:44.407 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:44.671 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.671 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.671 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.671 [337/745] Generating lib/rte_fib_def with a custom command 00:01:44.671 [338/745] Generating lib/rte_fib_mingw with a custom command 00:01:44.671 [339/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:44.671 [340/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.671 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:44.671 [342/745] Linking static target lib/librte_gso.a 00:01:44.671 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.932 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:44.932 [345/745] Linking static target lib/librte_regexdev.a 00:01:44.932 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.932 [347/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.194 [348/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:45.194 [349/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:45.194 [350/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:45.194 [351/745] Linking static target lib/librte_pcapng.a 00:01:45.194 [352/745] Linking static target lib/librte_efd.a 00:01:45.194 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:45.194 [354/745] Linking static target lib/librte_lpm.a 00:01:45.194 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:45.194 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.458 [357/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.458 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:45.458 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:45.458 [360/745] Linking static target lib/librte_reorder.a 00:01:45.458 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.458 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:45.458 [363/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.458 [364/745] Generating lib/rte_port_def with a custom command 00:01:45.458 [365/745] Generating lib/rte_port_mingw with a custom command 00:01:45.458 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.458 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.722 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:45.722 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:45.722 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:01:45.722 [371/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:45.722 [372/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:45.722 [373/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:45.722 [374/745] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:01:45.722 [375/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:45.722 [376/745] Linking static target lib/librte_security.a 00:01:45.722 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.722 [378/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:45.722 [379/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.722 [380/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:45.722 [381/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.722 [382/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.722 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.722 [384/745] Linking static target lib/librte_hash.a 00:01:45.723 [385/745] Linking static target lib/librte_power.a 00:01:45.987 [386/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:45.987 [387/745] Linking static target lib/acl/libavx2_tmp.a 00:01:45.987 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:45.987 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.987 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:45.987 [391/745] Linking static target lib/librte_rib.a 00:01:46.251 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:46.251 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:46.251 [394/745] Linking static target lib/librte_acl.a 00:01:46.251 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:46.251 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:46.251 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:46.251 [398/745] Generating lib/rte_table_def with a custom command 00:01:46.513 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:46.513 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.513 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.513 [402/745] Linking static target lib/librte_ethdev.a 00:01:46.513 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.513 [404/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:46.778 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.778 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:46.778 [407/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.778 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:47.044 [409/745] Generating lib/rte_pipeline_def with a custom command 00:01:47.044 [410/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:47.044 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:47.044 [412/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:47.044 [413/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:47.044 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:47.044 [415/745] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:47.044 [416/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:47.044 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:47.044 [418/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.044 [419/745] Generating lib/rte_graph_mingw with a custom command 00:01:47.044 [420/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:47.044 [421/745] Generating lib/rte_graph_def with a custom command 00:01:47.044 [422/745] Linking static target lib/librte_fib.a 00:01:47.044 [423/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:47.044 [424/745] Linking static target lib/librte_mbuf.a 00:01:47.044 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:47.304 [426/745] Linking static target lib/librte_member.a 00:01:47.304 [427/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.304 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:47.304 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:47.304 [430/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:47.304 [431/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:47.304 [432/745] Linking static target lib/librte_eventdev.a 00:01:47.567 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:47.567 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:47.567 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:47.567 [436/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:47.567 [437/745] Generating lib/rte_node_def with a custom command 00:01:47.567 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:47.567 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:47.567 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.567 [441/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:47.831 [442/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.831 [443/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.831 [444/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.831 [445/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:47.831 [446/745] Linking static target lib/librte_sched.a 00:01:47.831 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:47.831 [448/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:47.831 [449/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:47.831 [450/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.831 [451/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:47.831 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:47.831 [453/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.831 [454/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:47.831 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:47.831 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 
00:01:47.832 [457/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.094 [458/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:48.094 [459/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:48.094 [460/745] Linking static target lib/librte_cryptodev.a 00:01:48.094 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:48.094 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.094 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:48.094 [464/745] Linking static target lib/librte_pdump.a 00:01:48.094 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:48.094 [466/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.094 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:48.352 [468/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.352 [469/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.352 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:48.352 [471/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:48.352 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:48.352 [473/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:48.352 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:48.352 [475/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.352 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:48.352 [477/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.353 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:48.353 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:48.615 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:48.615 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:48.615 [482/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.615 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.615 [484/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:48.615 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.615 [486/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.615 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:48.615 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:48.615 [489/745] Linking static target lib/librte_table.a 00:01:48.875 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:48.875 [491/745] Linking static target lib/librte_ipsec.a 00:01:48.875 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:49.139 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:49.139 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.139 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.139 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:49.139 
[497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:49.403 [498/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.403 [499/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:49.403 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:49.403 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:49.403 [502/745] Linking static target lib/librte_graph.a 00:01:49.403 [503/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.403 [504/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:49.403 [505/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.403 [506/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:49.403 [507/745] Linking static target drivers/librte_bus_pci.a 00:01:49.403 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.403 [509/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:49.403 [510/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:49.403 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:49.665 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:49.665 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:49.665 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.935 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:49.935 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.205 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:50.205 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:50.205 [519/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.205 [520/745] Linking static target lib/librte_port.a 00:01:50.470 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:50.470 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:50.470 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.470 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:50.470 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.470 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:50.735 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.735 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:50.735 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.998 [530/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:50.998 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.998 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.998 [533/745] Linking static target drivers/librte_mempool_ring.a 00:01:50.998 [534/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:50.998 [535/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:50.998 [536/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:50.998 [537/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:51.260 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:51.260 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.260 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:51.260 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.523 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:51.788 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:51.788 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:51.788 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:51.788 [546/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:52.053 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:52.053 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:52.053 [549/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:52.053 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:52.053 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:52.315 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:52.315 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:52.580 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:52.580 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:52.580 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:52.580 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:52.844 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:52.844 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:53.103 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:53.103 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:53.103 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:53.103 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:53.364 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:53.364 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:53.364 [566/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:53.364 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:53.364 [568/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:53.364 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:53.364 [570/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 
00:01:53.364 [571/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.364 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:53.628 [573/745] Linking target lib/librte_eal.so.23.0 00:01:53.628 [574/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:53.628 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:53.628 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:53.628 [577/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:53.892 [578/745] Linking target lib/librte_ring.so.23.0 00:01:53.892 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:53.892 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:53.892 [581/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.892 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:53.892 [583/745] Linking target lib/librte_meter.so.23.0 00:01:53.892 [584/745] Linking target lib/librte_timer.so.23.0 00:01:53.892 [585/745] Linking target lib/librte_pci.so.23.0 00:01:53.892 [586/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:53.892 [587/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:53.892 [588/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:53.892 [589/745] Linking target lib/librte_acl.so.23.0 00:01:54.151 [590/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:54.151 [591/745] Linking target lib/librte_cfgfile.so.23.0 00:01:54.151 [592/745] Linking target lib/librte_jobstats.so.23.0 00:01:54.151 [593/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:54.151 [594/745] Linking target lib/librte_rawdev.so.23.0 00:01:54.151 [595/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:54.151 [596/745] Linking target lib/librte_rcu.so.23.0 00:01:54.151 [597/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:54.151 [598/745] Linking target lib/librte_mempool.so.23.0 00:01:54.151 [599/745] Linking target lib/librte_stack.so.23.0 00:01:54.151 [600/745] Linking target lib/librte_dmadev.so.23.0 00:01:54.151 [601/745] Linking target lib/librte_graph.so.23.0 00:01:54.151 [602/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:54.151 [603/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:54.151 [604/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:54.151 [605/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:54.151 [606/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:54.151 [607/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:54.151 [608/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:54.414 [609/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:54.414 [610/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:54.414 [611/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:54.414 [612/745] Generating 
symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:54.414 [613/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:54.414 [614/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:54.414 [615/745] Linking target lib/librte_rib.so.23.0 00:01:54.414 [616/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:54.414 [617/745] Linking target lib/librte_mbuf.so.23.0 00:01:54.414 [618/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:54.676 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:54.676 [620/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:54.676 [621/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:54.676 [622/745] Linking target lib/librte_bbdev.so.23.0 00:01:54.676 [623/745] Linking target lib/librte_net.so.23.0 00:01:54.937 [624/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.937 [625/745] Linking target lib/librte_compressdev.so.23.0 00:01:54.937 [626/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.937 [627/745] Linking target lib/librte_cryptodev.so.23.0 00:01:54.937 [628/745] Linking target lib/librte_distributor.so.23.0 00:01:54.937 [629/745] Linking target lib/librte_gpudev.so.23.0 00:01:54.937 [630/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:54.937 [631/745] Linking target lib/librte_regexdev.so.23.0 00:01:54.937 [632/745] Linking target lib/librte_reorder.so.23.0 00:01:54.937 [633/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:54.937 [634/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:54.937 [635/745] Linking target lib/librte_sched.so.23.0 00:01:54.937 [636/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:54.937 [637/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:54.937 [638/745] Linking target lib/librte_fib.so.23.0 00:01:55.203 [639/745] Linking target lib/librte_hash.so.23.0 00:01:55.203 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:55.203 [641/745] Linking target lib/librte_cmdline.so.23.0 00:01:55.203 [642/745] Linking target lib/librte_ethdev.so.23.0 00:01:55.203 [643/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:55.203 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:55.203 [645/745] Linking target lib/librte_security.so.23.0 00:01:55.203 [646/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:55.203 [647/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:55.203 [648/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:55.203 [649/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:55.203 [650/745] Linking target lib/librte_lpm.so.23.0 00:01:55.463 [651/745] Linking target lib/librte_efd.so.23.0 00:01:55.463 [652/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:55.463 [653/745] Linking target lib/librte_member.so.23.0 00:01:55.463 [654/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:55.463 [655/745] Linking target 
lib/librte_pcapng.so.23.0 00:01:55.463 [656/745] Linking target lib/librte_metrics.so.23.0 00:01:55.463 [657/745] Linking target lib/librte_gro.so.23.0 00:01:55.463 [658/745] Linking target lib/librte_bpf.so.23.0 00:01:55.463 [659/745] Linking target lib/librte_gso.so.23.0 00:01:55.463 [660/745] Linking target lib/librte_ip_frag.so.23.0 00:01:55.463 [661/745] Linking target lib/librte_power.so.23.0 00:01:55.463 [662/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:55.463 [663/745] Linking target lib/librte_eventdev.so.23.0 00:01:55.463 [664/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:55.463 [665/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:55.463 [666/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:55.463 [667/745] Linking target lib/librte_ipsec.so.23.0 00:01:55.463 [668/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:55.463 [669/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.463 [670/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:55.721 [671/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:55.721 [672/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:55.721 [673/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:55.721 [674/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:55.722 [675/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:55.722 [676/745] Linking target lib/librte_latencystats.so.23.0 00:01:55.722 [677/745] Linking target lib/librte_pdump.so.23.0 00:01:55.722 [678/745] Linking target lib/librte_bitratestats.so.23.0 00:01:55.722 [679/745] Linking target lib/librte_port.so.23.0 00:01:55.722 [680/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:55.722 [681/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.980 [682/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:55.980 [683/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:55.980 [684/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:55.980 [685/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:55.980 [686/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:55.980 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:55.980 [688/745] Linking target lib/librte_table.so.23.0 00:01:56.237 [689/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:56.237 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:56.237 [691/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:56.237 [692/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:56.237 [693/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:56.495 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.495 [695/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:56.495 [696/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:56.754 
[697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:57.012 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:57.270 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:57.270 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:57.270 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:57.270 [702/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:57.270 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:57.529 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:57.529 [705/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.529 [706/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:57.787 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.787 [708/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.787 [709/745] Linking static target drivers/librte_net_i40e.a 00:01:58.045 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:58.045 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.335 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:58.901 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:58.901 [714/745] Linking static target lib/librte_node.a 00:01:59.164 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.164 [716/745] Linking target lib/librte_node.so.23.0 00:01:59.423 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.988 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:00.924 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:09.031 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.110 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.110 [722/745] Linking static target lib/librte_vhost.a 00:02:41.368 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.627 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:49.753 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:49.753 [726/745] Linking static target lib/librte_pipeline.a 00:02:49.753 [727/745] Linking target app/dpdk-dumpcap 00:02:49.753 [728/745] Linking target app/dpdk-test-cmdline 00:02:49.753 [729/745] Linking target app/dpdk-test-pipeline 00:02:49.753 [730/745] Linking target app/dpdk-test-regex 00:02:49.753 [731/745] Linking target app/dpdk-pdump 00:02:49.753 [732/745] Linking target app/dpdk-test-flow-perf 00:02:49.753 [733/745] Linking target app/dpdk-test-security-perf 00:02:49.753 [734/745] Linking target app/dpdk-test-fib 00:02:49.753 [735/745] Linking target app/dpdk-test-sad 00:02:49.753 [736/745] Linking target app/dpdk-proc-info 00:02:49.753 [737/745] Linking target app/dpdk-test-acl 00:02:49.753 [738/745] Linking target app/dpdk-test-gpudev 00:02:49.753 [739/745] Linking target app/dpdk-test-bbdev 00:02:49.753 [740/745] Linking target app/dpdk-test-eventdev 00:02:49.753 [741/745] Linking target app/dpdk-test-compress-perf 
00:02:49.753 [742/745] Linking target app/dpdk-test-crypto-perf 00:02:49.753 [743/745] Linking target app/dpdk-testpmd 00:02:51.184 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.443 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:51.443 00:42:35 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:51.443 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:51.443 [0/1] Installing files. 00:02:51.705 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:51.708 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.709 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.710 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.711 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.711 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.711 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.711 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.711 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:51.711 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:51.711 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.711 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.969 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.970 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:52.231 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:52.231 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:52.231 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.231 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:52.231 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:52.495 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:52.495 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:52.495 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:52.495 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:52.495 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:52.495 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:52.495 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:52.495 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:52.495 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:52.495 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:52.495 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:52.495 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:52.495 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:52.495 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:52.496 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:52.496 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:52.496 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:52.496 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:52.496 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:52.496 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:52.496 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:52.496 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:52.496 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:52.496 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:52.496 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:52.496 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:52.496 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:52.496 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:52.496 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:52.496 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:52.496 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:52.496 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:52.496 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:52.496 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:52.496 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:52.496 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:52.496 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:52.496 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:52.496 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:52.496 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:52.496 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:52.496 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:52.496 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:52.496 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:52.496 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:52.496 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:52.496 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:52.496 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:52.496 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:52.496 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:52.496 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:52.496 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:52.496 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:52.496 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:52.496 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:52.496 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:52.496 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:52.496 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:52.496 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:52.496 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:52.496 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:52.496 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:52.496 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:52.496 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:52.496 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:52.496 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:52.496 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:52.496 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:52.496 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:52.496 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:52.496 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:52.496 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:52.496 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:52.496 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:52.496 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:52.496 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:52.496 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:52.496 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:52.496 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:52.496 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:52.496 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:52.496 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:52.496 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:52.496 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:52.496 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:52.496 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:52.496 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:52.496 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:52.496 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:52.496 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:52.496 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:52.496 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:52.496 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:52.496 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:52.496 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:52.496 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:52.496 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:52.496 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:52.496 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:52.496 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:52.496 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:52.496 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:52.496 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:52.496 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:52.496 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:52.496 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:52.497 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:52.497 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:52.497 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:52.497 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:52.497 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:52.497 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:52.497 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:52.497 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:52.497 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:52.497 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:52.497 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:52.497 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:52.497 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:52.497 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:52.497 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:52.497 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:52.497 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:52.497 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:52.497 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:52.497 00:42:36 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:52.497 00:42:36 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.497 00:42:36 -- common/autobuild_common.sh@203 -- $ cat 00:02:52.497 00:42:36 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.497 00:02:52.497 real 1m19.020s 00:02:52.497 user 14m20.602s 00:02:52.497 sys 1m47.948s 00:02:52.497 00:42:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:52.497 00:42:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.497 ************************************ 00:02:52.497 END TEST build_native_dpdk 00:02:52.497 ************************************ 00:02:52.497 00:42:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:52.497 00:42:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:52.497 00:42:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:52.497 00:42:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:52.497 00:42:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:52.497 00:42:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:52.497 00:42:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:52.497 00:42:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:52.497 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
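At this point the DPDK install tree is complete: each library is laid out as a versioned librte_*.so.23.0 with .so.23 and .so symlinks, the PMD plugins are relocated under dpdk/pmds-23.0 by symlink-drivers-solibs.sh, and libdpdk.pc lands in build/lib/pkgconfig, which is the directory the SPDK configure step above is told to use for additional libs. A quick way to sanity-check an install like this from a shell is sketched below; it is an illustration only (the PKG_CONFIG_PATH export and the choice of librte_eal as the probe are assumptions, not commands taken from this job):

  # Point pkg-config at the freshly installed DPDK (path matches the workspace above)
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk          # version advertised by the installed libdpdk.pc
  pkg-config --cflags --libs libdpdk       # include/lib flags a consumer such as SPDK picks up
  # Inspect one symlink chain: librte_eal.so -> librte_eal.so.23 -> librte_eal.so.23.0
  ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*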
00:02:52.497 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.497 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.755 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:53.013 Using 'verbs' RDMA provider 00:03:03.242 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:13.213 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:13.213 Creating mk/config.mk...done. 00:03:13.213 Creating mk/cc.flags.mk...done. 00:03:13.213 Type 'make' to build. 00:03:13.213 00:42:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:13.213 00:42:56 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:13.213 00:42:56 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:13.213 00:42:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.213 ************************************ 00:03:13.213 START TEST make 00:03:13.213 ************************************ 00:03:13.213 00:42:56 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:13.213 make[1]: Nothing to be done for 'all'. 00:03:13.791 The Meson build system 00:03:13.791 Version: 1.3.1 00:03:13.791 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:13.791 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:13.791 Build type: native build 00:03:13.791 Project name: libvfio-user 00:03:13.791 Project version: 0.0.1 00:03:13.791 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:13.791 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:13.791 Host machine cpu family: x86_64 00:03:13.791 Host machine cpu: x86_64 00:03:13.791 Run-time dependency threads found: YES 00:03:13.791 Library dl found: YES 00:03:13.791 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:13.791 Run-time dependency json-c found: YES 0.17 00:03:13.791 Run-time dependency cmocka found: YES 1.1.7 00:03:13.791 Program pytest-3 found: NO 00:03:13.791 Program flake8 found: NO 00:03:13.791 Program misspell-fixer found: NO 00:03:13.791 Program restructuredtext-lint found: NO 00:03:13.791 Program valgrind found: YES (/usr/bin/valgrind) 00:03:13.791 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.791 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.791 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.791 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:13.791 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:13.791 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:13.791 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:13.791 Build targets in project: 8 00:03:13.791 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:13.791 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:13.791 00:03:13.791 libvfio-user 0.0.1 00:03:13.791 00:03:13.791 User defined options 00:03:13.791 buildtype : debug 00:03:13.791 default_library: shared 00:03:13.791 libdir : /usr/local/lib 00:03:13.791 00:03:13.791 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.364 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.631 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:14.631 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:14.631 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:14.631 [4/37] Compiling C object samples/null.p/null.c.o 00:03:14.631 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:14.631 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:14.892 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:14.892 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:14.892 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:14.892 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:14.892 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:14.892 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:14.892 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:14.892 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:14.892 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:14.892 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:14.892 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:14.892 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:14.892 [19/37] Compiling C object samples/server.p/server.c.o 00:03:14.892 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:14.892 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:14.892 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:14.892 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:14.892 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:14.892 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:14.892 [26/37] Compiling C object samples/client.p/client.c.o 00:03:15.156 [27/37] Linking target samples/client 00:03:15.156 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:15.156 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:15.156 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:15.156 [31/37] Linking target test/unit_tests 00:03:15.419 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:15.419 [33/37] Linking target samples/null 00:03:15.419 [34/37] Linking target samples/gpio-pci-idio-16 00:03:15.419 [35/37] Linking target samples/server 00:03:15.419 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:15.419 [37/37] Linking target samples/lspci 00:03:15.419 INFO: autodetecting backend as ninja 00:03:15.419 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
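The libvfio-user subproject above is configured with Meson (buildtype debug, default_library shared, libdir /usr/local/lib), built with Ninja (37 targets), and then staged into a DESTDIR tree, as the next entry shows. Run outside the autobuild wrapper, the equivalent sequence would look roughly like the sketch below; the build-directory name is an assumption for illustration, not copied from SPDK's scripts:

  # Configure, build, and stage libvfio-user the way this log does it
  meson setup build-debug /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C build-debug                                   # drives the [N/37] compile/link steps
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C build-debug             # staged install, no root required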
00:03:15.419 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:16.364 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:16.364 ninja: no work to do. 00:03:28.595 CC lib/ut/ut.o 00:03:28.595 CC lib/log/log.o 00:03:28.595 CC lib/log/log_flags.o 00:03:28.595 CC lib/log/log_deprecated.o 00:03:28.595 CC lib/ut_mock/mock.o 00:03:28.595 LIB libspdk_ut_mock.a 00:03:28.595 LIB libspdk_ut.a 00:03:28.595 LIB libspdk_log.a 00:03:28.595 SO libspdk_ut_mock.so.5.0 00:03:28.595 SO libspdk_ut.so.1.0 00:03:28.595 SO libspdk_log.so.6.1 00:03:28.595 SYMLINK libspdk_ut_mock.so 00:03:28.595 SYMLINK libspdk_ut.so 00:03:28.595 SYMLINK libspdk_log.so 00:03:28.595 CXX lib/trace_parser/trace.o 00:03:28.595 CC lib/dma/dma.o 00:03:28.595 CC lib/util/base64.o 00:03:28.595 CC lib/util/bit_array.o 00:03:28.595 CC lib/util/cpuset.o 00:03:28.595 CC lib/ioat/ioat.o 00:03:28.595 CC lib/util/crc16.o 00:03:28.595 CC lib/util/crc32.o 00:03:28.595 CC lib/util/crc32c.o 00:03:28.595 CC lib/util/crc32_ieee.o 00:03:28.595 CC lib/util/crc64.o 00:03:28.595 CC lib/util/dif.o 00:03:28.595 CC lib/util/fd.o 00:03:28.595 CC lib/util/file.o 00:03:28.595 CC lib/util/hexlify.o 00:03:28.595 CC lib/util/iov.o 00:03:28.595 CC lib/util/math.o 00:03:28.595 CC lib/util/pipe.o 00:03:28.595 CC lib/util/strerror_tls.o 00:03:28.595 CC lib/util/uuid.o 00:03:28.595 CC lib/util/string.o 00:03:28.595 CC lib/util/fd_group.o 00:03:28.595 CC lib/util/xor.o 00:03:28.595 CC lib/util/zipf.o 00:03:28.595 CC lib/vfio_user/host/vfio_user_pci.o 00:03:28.595 CC lib/vfio_user/host/vfio_user.o 00:03:28.595 LIB libspdk_dma.a 00:03:28.595 SO libspdk_dma.so.3.0 00:03:28.596 SYMLINK libspdk_dma.so 00:03:28.596 LIB libspdk_ioat.a 00:03:28.596 LIB libspdk_vfio_user.a 00:03:28.596 SO libspdk_ioat.so.6.0 00:03:28.596 SO libspdk_vfio_user.so.4.0 00:03:28.596 SYMLINK libspdk_ioat.so 00:03:28.596 SYMLINK libspdk_vfio_user.so 00:03:28.596 LIB libspdk_util.a 00:03:28.596 SO libspdk_util.so.8.0 00:03:28.596 SYMLINK libspdk_util.so 00:03:28.596 CC lib/env_dpdk/env.o 00:03:28.596 CC lib/rdma/common.o 00:03:28.596 CC lib/conf/conf.o 00:03:28.596 CC lib/vmd/vmd.o 00:03:28.596 CC lib/env_dpdk/memory.o 00:03:28.596 CC lib/json/json_parse.o 00:03:28.596 CC lib/rdma/rdma_verbs.o 00:03:28.596 CC lib/env_dpdk/pci.o 00:03:28.596 CC lib/vmd/led.o 00:03:28.596 CC lib/json/json_util.o 00:03:28.596 CC lib/env_dpdk/init.o 00:03:28.596 CC lib/json/json_write.o 00:03:28.596 CC lib/env_dpdk/threads.o 00:03:28.596 CC lib/idxd/idxd.o 00:03:28.596 CC lib/env_dpdk/pci_ioat.o 00:03:28.596 CC lib/idxd/idxd_user.o 00:03:28.596 CC lib/env_dpdk/pci_virtio.o 00:03:28.596 CC lib/idxd/idxd_kernel.o 00:03:28.596 CC lib/env_dpdk/pci_vmd.o 00:03:28.596 CC lib/env_dpdk/pci_idxd.o 00:03:28.596 CC lib/env_dpdk/pci_event.o 00:03:28.596 CC lib/env_dpdk/sigbus_handler.o 00:03:28.596 CC lib/env_dpdk/pci_dpdk.o 00:03:28.596 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:28.596 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:28.596 LIB libspdk_trace_parser.a 00:03:28.596 SO libspdk_trace_parser.so.4.0 00:03:28.854 LIB libspdk_conf.a 00:03:28.854 SYMLINK libspdk_trace_parser.so 00:03:28.854 SO libspdk_conf.so.5.0 00:03:28.854 LIB libspdk_json.a 00:03:28.854 SYMLINK libspdk_conf.so 00:03:28.854 SO libspdk_json.so.5.1 00:03:28.854 LIB libspdk_rdma.a 00:03:28.854 SYMLINK libspdk_json.so 00:03:28.854 SO libspdk_rdma.so.5.0 00:03:29.112 SYMLINK 
libspdk_rdma.so 00:03:29.112 CC lib/jsonrpc/jsonrpc_server.o 00:03:29.112 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:29.112 CC lib/jsonrpc/jsonrpc_client.o 00:03:29.112 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:29.112 LIB libspdk_idxd.a 00:03:29.112 SO libspdk_idxd.so.11.0 00:03:29.112 SYMLINK libspdk_idxd.so 00:03:29.370 LIB libspdk_vmd.a 00:03:29.370 SO libspdk_vmd.so.5.0 00:03:29.370 LIB libspdk_jsonrpc.a 00:03:29.370 SYMLINK libspdk_vmd.so 00:03:29.370 SO libspdk_jsonrpc.so.5.1 00:03:29.370 SYMLINK libspdk_jsonrpc.so 00:03:29.628 CC lib/rpc/rpc.o 00:03:29.628 LIB libspdk_rpc.a 00:03:29.628 SO libspdk_rpc.so.5.0 00:03:29.886 SYMLINK libspdk_rpc.so 00:03:29.886 CC lib/trace/trace.o 00:03:29.886 CC lib/trace/trace_flags.o 00:03:29.886 CC lib/trace/trace_rpc.o 00:03:29.886 CC lib/notify/notify.o 00:03:29.886 CC lib/sock/sock.o 00:03:29.886 CC lib/notify/notify_rpc.o 00:03:29.886 CC lib/sock/sock_rpc.o 00:03:30.144 LIB libspdk_notify.a 00:03:30.144 SO libspdk_notify.so.5.0 00:03:30.144 LIB libspdk_trace.a 00:03:30.144 SYMLINK libspdk_notify.so 00:03:30.144 SO libspdk_trace.so.9.0 00:03:30.144 SYMLINK libspdk_trace.so 00:03:30.402 LIB libspdk_sock.a 00:03:30.402 SO libspdk_sock.so.8.0 00:03:30.402 CC lib/thread/thread.o 00:03:30.402 CC lib/thread/iobuf.o 00:03:30.402 SYMLINK libspdk_sock.so 00:03:30.402 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.402 CC lib/nvme/nvme_ctrlr.o 00:03:30.402 CC lib/nvme/nvme_fabric.o 00:03:30.402 CC lib/nvme/nvme_ns_cmd.o 00:03:30.402 CC lib/nvme/nvme_ns.o 00:03:30.402 CC lib/nvme/nvme_pcie_common.o 00:03:30.402 CC lib/nvme/nvme_pcie.o 00:03:30.402 CC lib/nvme/nvme_qpair.o 00:03:30.402 CC lib/nvme/nvme.o 00:03:30.402 CC lib/nvme/nvme_quirks.o 00:03:30.402 CC lib/nvme/nvme_transport.o 00:03:30.402 CC lib/nvme/nvme_discovery.o 00:03:30.402 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.402 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.402 CC lib/nvme/nvme_tcp.o 00:03:30.402 CC lib/nvme/nvme_opal.o 00:03:30.402 CC lib/nvme/nvme_io_msg.o 00:03:30.402 CC lib/nvme/nvme_poll_group.o 00:03:30.402 CC lib/nvme/nvme_zns.o 00:03:30.402 CC lib/nvme/nvme_cuse.o 00:03:30.402 CC lib/nvme/nvme_vfio_user.o 00:03:30.402 CC lib/nvme/nvme_rdma.o 00:03:30.402 LIB libspdk_env_dpdk.a 00:03:30.661 SO libspdk_env_dpdk.so.13.0 00:03:30.661 SYMLINK libspdk_env_dpdk.so 00:03:32.037 LIB libspdk_thread.a 00:03:32.037 SO libspdk_thread.so.9.0 00:03:32.037 SYMLINK libspdk_thread.so 00:03:32.037 CC lib/accel/accel.o 00:03:32.037 CC lib/init/json_config.o 00:03:32.037 CC lib/vfu_tgt/tgt_endpoint.o 00:03:32.037 CC lib/accel/accel_rpc.o 00:03:32.037 CC lib/blob/blobstore.o 00:03:32.037 CC lib/vfu_tgt/tgt_rpc.o 00:03:32.037 CC lib/init/subsystem.o 00:03:32.037 CC lib/virtio/virtio.o 00:03:32.037 CC lib/accel/accel_sw.o 00:03:32.037 CC lib/init/subsystem_rpc.o 00:03:32.037 CC lib/blob/request.o 00:03:32.037 CC lib/virtio/virtio_vhost_user.o 00:03:32.037 CC lib/init/rpc.o 00:03:32.037 CC lib/blob/zeroes.o 00:03:32.037 CC lib/virtio/virtio_vfio_user.o 00:03:32.037 CC lib/blob/blob_bs_dev.o 00:03:32.037 CC lib/virtio/virtio_pci.o 00:03:32.295 LIB libspdk_init.a 00:03:32.295 SO libspdk_init.so.4.0 00:03:32.553 LIB libspdk_virtio.a 00:03:32.553 SYMLINK libspdk_init.so 00:03:32.553 LIB libspdk_vfu_tgt.a 00:03:32.553 SO libspdk_virtio.so.6.0 00:03:32.553 SO libspdk_vfu_tgt.so.2.0 00:03:32.553 SYMLINK libspdk_vfu_tgt.so 00:03:32.553 SYMLINK libspdk_virtio.so 00:03:32.553 CC lib/event/app.o 00:03:32.553 CC lib/event/reactor.o 00:03:32.553 CC lib/event/log_rpc.o 00:03:32.553 CC lib/event/app_rpc.o 00:03:32.553 CC 
lib/event/scheduler_static.o 00:03:32.812 LIB libspdk_nvme.a 00:03:32.812 SO libspdk_nvme.so.12.0 00:03:33.070 LIB libspdk_event.a 00:03:33.070 SO libspdk_event.so.12.0 00:03:33.070 SYMLINK libspdk_event.so 00:03:33.070 LIB libspdk_accel.a 00:03:33.070 SYMLINK libspdk_nvme.so 00:03:33.329 SO libspdk_accel.so.14.0 00:03:33.329 SYMLINK libspdk_accel.so 00:03:33.329 CC lib/bdev/bdev.o 00:03:33.329 CC lib/bdev/bdev_rpc.o 00:03:33.329 CC lib/bdev/bdev_zone.o 00:03:33.329 CC lib/bdev/part.o 00:03:33.329 CC lib/bdev/scsi_nvme.o 00:03:34.703 LIB libspdk_blob.a 00:03:34.961 SO libspdk_blob.so.10.1 00:03:34.961 SYMLINK libspdk_blob.so 00:03:34.961 CC lib/blobfs/blobfs.o 00:03:34.961 CC lib/blobfs/tree.o 00:03:34.961 CC lib/lvol/lvol.o 00:03:35.895 LIB libspdk_blobfs.a 00:03:35.895 SO libspdk_blobfs.so.9.0 00:03:35.895 LIB libspdk_lvol.a 00:03:35.895 LIB libspdk_bdev.a 00:03:35.895 SO libspdk_lvol.so.9.1 00:03:35.895 SYMLINK libspdk_blobfs.so 00:03:35.895 SO libspdk_bdev.so.14.0 00:03:35.895 SYMLINK libspdk_lvol.so 00:03:35.895 SYMLINK libspdk_bdev.so 00:03:36.161 CC lib/nvmf/ctrlr.o 00:03:36.161 CC lib/ublk/ublk.o 00:03:36.161 CC lib/nbd/nbd.o 00:03:36.161 CC lib/ublk/ublk_rpc.o 00:03:36.161 CC lib/nvmf/ctrlr_discovery.o 00:03:36.161 CC lib/scsi/dev.o 00:03:36.161 CC lib/nbd/nbd_rpc.o 00:03:36.161 CC lib/scsi/lun.o 00:03:36.161 CC lib/nvmf/ctrlr_bdev.o 00:03:36.161 CC lib/ftl/ftl_core.o 00:03:36.161 CC lib/nvmf/subsystem.o 00:03:36.161 CC lib/scsi/port.o 00:03:36.161 CC lib/ftl/ftl_init.o 00:03:36.161 CC lib/nvmf/nvmf.o 00:03:36.161 CC lib/scsi/scsi.o 00:03:36.161 CC lib/ftl/ftl_layout.o 00:03:36.161 CC lib/nvmf/nvmf_rpc.o 00:03:36.161 CC lib/scsi/scsi_bdev.o 00:03:36.161 CC lib/nvmf/transport.o 00:03:36.161 CC lib/scsi/scsi_pr.o 00:03:36.161 CC lib/ftl/ftl_debug.o 00:03:36.161 CC lib/scsi/scsi_rpc.o 00:03:36.161 CC lib/scsi/task.o 00:03:36.161 CC lib/nvmf/tcp.o 00:03:36.161 CC lib/ftl/ftl_io.o 00:03:36.161 CC lib/nvmf/vfio_user.o 00:03:36.161 CC lib/nvmf/rdma.o 00:03:36.161 CC lib/ftl/ftl_sb.o 00:03:36.161 CC lib/ftl/ftl_l2p.o 00:03:36.161 CC lib/ftl/ftl_l2p_flat.o 00:03:36.161 CC lib/ftl/ftl_nv_cache.o 00:03:36.161 CC lib/ftl/ftl_band.o 00:03:36.161 CC lib/ftl/ftl_band_ops.o 00:03:36.161 CC lib/ftl/ftl_writer.o 00:03:36.161 CC lib/ftl/ftl_rq.o 00:03:36.161 CC lib/ftl/ftl_l2p_cache.o 00:03:36.161 CC lib/ftl/ftl_reloc.o 00:03:36.161 CC lib/ftl/ftl_p2l.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.161 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.422 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.422 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.422 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.422 CC lib/ftl/utils/ftl_conf.o 00:03:36.422 CC lib/ftl/utils/ftl_md.o 00:03:36.422 CC lib/ftl/utils/ftl_mempool.o 00:03:36.422 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.422 CC lib/ftl/utils/ftl_property.o 00:03:36.422 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.683 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.683 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.683 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:36.683 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:36.683 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:36.683 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:03:36.683 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.683 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.683 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.683 CC lib/ftl/base/ftl_base_dev.o 00:03:36.683 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.683 CC lib/ftl/ftl_trace.o 00:03:36.941 LIB libspdk_nbd.a 00:03:36.941 SO libspdk_nbd.so.6.0 00:03:36.941 SYMLINK libspdk_nbd.so 00:03:36.941 LIB libspdk_scsi.a 00:03:36.941 SO libspdk_scsi.so.8.0 00:03:37.199 SYMLINK libspdk_scsi.so 00:03:37.199 LIB libspdk_ublk.a 00:03:37.199 SO libspdk_ublk.so.2.0 00:03:37.199 SYMLINK libspdk_ublk.so 00:03:37.199 CC lib/iscsi/conn.o 00:03:37.199 CC lib/iscsi/init_grp.o 00:03:37.199 CC lib/iscsi/iscsi.o 00:03:37.199 CC lib/iscsi/md5.o 00:03:37.199 CC lib/iscsi/param.o 00:03:37.199 CC lib/iscsi/portal_grp.o 00:03:37.199 CC lib/vhost/vhost.o 00:03:37.199 CC lib/iscsi/tgt_node.o 00:03:37.199 CC lib/vhost/vhost_rpc.o 00:03:37.199 CC lib/iscsi/iscsi_subsystem.o 00:03:37.199 CC lib/iscsi/iscsi_rpc.o 00:03:37.199 CC lib/vhost/vhost_scsi.o 00:03:37.199 CC lib/iscsi/task.o 00:03:37.199 CC lib/vhost/vhost_blk.o 00:03:37.199 CC lib/vhost/rte_vhost_user.o 00:03:37.458 LIB libspdk_ftl.a 00:03:37.716 SO libspdk_ftl.so.8.0 00:03:37.974 SYMLINK libspdk_ftl.so 00:03:38.541 LIB libspdk_vhost.a 00:03:38.541 SO libspdk_vhost.so.7.1 00:03:38.541 SYMLINK libspdk_vhost.so 00:03:38.542 LIB libspdk_nvmf.a 00:03:38.542 LIB libspdk_iscsi.a 00:03:38.800 SO libspdk_nvmf.so.17.0 00:03:38.800 SO libspdk_iscsi.so.7.0 00:03:38.800 SYMLINK libspdk_iscsi.so 00:03:38.800 SYMLINK libspdk_nvmf.so 00:03:39.059 CC module/vfu_device/vfu_virtio.o 00:03:39.059 CC module/env_dpdk/env_dpdk_rpc.o 00:03:39.059 CC module/vfu_device/vfu_virtio_blk.o 00:03:39.059 CC module/vfu_device/vfu_virtio_scsi.o 00:03:39.059 CC module/vfu_device/vfu_virtio_rpc.o 00:03:39.059 CC module/accel/dsa/accel_dsa.o 00:03:39.059 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:39.059 CC module/accel/error/accel_error.o 00:03:39.059 CC module/accel/dsa/accel_dsa_rpc.o 00:03:39.059 CC module/accel/error/accel_error_rpc.o 00:03:39.059 CC module/sock/posix/posix.o 00:03:39.059 CC module/blob/bdev/blob_bdev.o 00:03:39.059 CC module/accel/ioat/accel_ioat.o 00:03:39.059 CC module/scheduler/gscheduler/gscheduler.o 00:03:39.059 CC module/accel/ioat/accel_ioat_rpc.o 00:03:39.059 CC module/accel/iaa/accel_iaa.o 00:03:39.059 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:39.059 CC module/accel/iaa/accel_iaa_rpc.o 00:03:39.316 LIB libspdk_env_dpdk_rpc.a 00:03:39.316 SO libspdk_env_dpdk_rpc.so.5.0 00:03:39.316 LIB libspdk_scheduler_gscheduler.a 00:03:39.316 SYMLINK libspdk_env_dpdk_rpc.so 00:03:39.316 LIB libspdk_scheduler_dpdk_governor.a 00:03:39.316 SO libspdk_scheduler_gscheduler.so.3.0 00:03:39.316 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:39.316 LIB libspdk_accel_error.a 00:03:39.316 LIB libspdk_accel_ioat.a 00:03:39.316 LIB libspdk_scheduler_dynamic.a 00:03:39.316 LIB libspdk_accel_iaa.a 00:03:39.316 SO libspdk_accel_error.so.1.0 00:03:39.316 SO libspdk_accel_ioat.so.5.0 00:03:39.316 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.316 SO libspdk_scheduler_dynamic.so.3.0 00:03:39.316 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:39.316 SO libspdk_accel_iaa.so.2.0 00:03:39.316 LIB libspdk_accel_dsa.a 00:03:39.316 LIB libspdk_blob_bdev.a 00:03:39.316 SYMLINK libspdk_accel_error.so 00:03:39.316 SYMLINK libspdk_accel_ioat.so 00:03:39.316 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.316 SO libspdk_accel_dsa.so.4.0 00:03:39.316 SO libspdk_blob_bdev.so.10.1 00:03:39.316 SYMLINK 
libspdk_accel_iaa.so 00:03:39.575 SYMLINK libspdk_accel_dsa.so 00:03:39.575 SYMLINK libspdk_blob_bdev.so 00:03:39.575 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.575 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.575 CC module/bdev/malloc/bdev_malloc.o 00:03:39.575 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.575 CC module/bdev/nvme/bdev_nvme.o 00:03:39.575 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.575 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.575 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.575 CC module/bdev/null/bdev_null.o 00:03:39.575 CC module/bdev/delay/vbdev_delay.o 00:03:39.575 CC module/bdev/error/vbdev_error.o 00:03:39.575 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.575 CC module/bdev/nvme/nvme_rpc.o 00:03:39.575 CC module/bdev/null/bdev_null_rpc.o 00:03:39.575 CC module/bdev/aio/bdev_aio.o 00:03:39.575 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.575 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.575 CC module/bdev/ftl/bdev_ftl.o 00:03:39.575 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.575 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.575 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.575 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:39.575 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.575 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.575 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.575 CC module/bdev/nvme/vbdev_opal.o 00:03:39.575 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.575 CC module/bdev/gpt/gpt.o 00:03:39.575 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.575 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.575 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.575 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.575 CC module/bdev/split/vbdev_split.o 00:03:39.575 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.575 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.575 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.575 CC module/bdev/raid/bdev_raid.o 00:03:39.575 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.575 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.575 CC module/bdev/raid/raid0.o 00:03:39.575 CC module/bdev/raid/raid1.o 00:03:39.575 CC module/bdev/raid/concat.o 00:03:39.836 LIB libspdk_vfu_device.a 00:03:39.836 SO libspdk_vfu_device.so.2.0 00:03:39.836 SYMLINK libspdk_vfu_device.so 00:03:40.094 LIB libspdk_sock_posix.a 00:03:40.094 LIB libspdk_bdev_split.a 00:03:40.094 LIB libspdk_blobfs_bdev.a 00:03:40.094 SO libspdk_sock_posix.so.5.0 00:03:40.094 SO libspdk_bdev_split.so.5.0 00:03:40.094 SO libspdk_blobfs_bdev.so.5.0 00:03:40.094 LIB libspdk_bdev_null.a 00:03:40.094 SYMLINK libspdk_bdev_split.so 00:03:40.094 SYMLINK libspdk_sock_posix.so 00:03:40.094 SYMLINK libspdk_blobfs_bdev.so 00:03:40.094 SO libspdk_bdev_null.so.5.0 00:03:40.094 LIB libspdk_bdev_error.a 00:03:40.094 LIB libspdk_bdev_gpt.a 00:03:40.094 LIB libspdk_bdev_ftl.a 00:03:40.094 LIB libspdk_bdev_aio.a 00:03:40.094 SYMLINK libspdk_bdev_null.so 00:03:40.094 SO libspdk_bdev_error.so.5.0 00:03:40.094 SO libspdk_bdev_gpt.so.5.0 00:03:40.094 LIB libspdk_bdev_passthru.a 00:03:40.094 SO libspdk_bdev_ftl.so.5.0 00:03:40.094 SO libspdk_bdev_aio.so.5.0 00:03:40.094 LIB libspdk_bdev_zone_block.a 00:03:40.094 SO libspdk_bdev_passthru.so.5.0 00:03:40.094 LIB libspdk_bdev_iscsi.a 00:03:40.352 LIB libspdk_bdev_delay.a 00:03:40.352 SYMLINK libspdk_bdev_error.so 00:03:40.352 SO libspdk_bdev_zone_block.so.5.0 00:03:40.352 SYMLINK libspdk_bdev_gpt.so 00:03:40.352 SO libspdk_bdev_iscsi.so.5.0 00:03:40.352 LIB libspdk_bdev_malloc.a 00:03:40.352 SYMLINK libspdk_bdev_aio.so 00:03:40.352 
SYMLINK libspdk_bdev_ftl.so 00:03:40.352 SO libspdk_bdev_delay.so.5.0 00:03:40.353 SYMLINK libspdk_bdev_passthru.so 00:03:40.353 SO libspdk_bdev_malloc.so.5.0 00:03:40.353 SYMLINK libspdk_bdev_zone_block.so 00:03:40.353 SYMLINK libspdk_bdev_iscsi.so 00:03:40.353 SYMLINK libspdk_bdev_delay.so 00:03:40.353 LIB libspdk_bdev_lvol.a 00:03:40.353 SYMLINK libspdk_bdev_malloc.so 00:03:40.353 SO libspdk_bdev_lvol.so.5.0 00:03:40.353 LIB libspdk_bdev_virtio.a 00:03:40.353 SYMLINK libspdk_bdev_lvol.so 00:03:40.353 SO libspdk_bdev_virtio.so.5.0 00:03:40.611 SYMLINK libspdk_bdev_virtio.so 00:03:40.611 LIB libspdk_bdev_raid.a 00:03:40.611 SO libspdk_bdev_raid.so.5.0 00:03:40.868 SYMLINK libspdk_bdev_raid.so 00:03:41.827 LIB libspdk_bdev_nvme.a 00:03:42.097 SO libspdk_bdev_nvme.so.6.0 00:03:42.097 SYMLINK libspdk_bdev_nvme.so 00:03:42.355 CC module/event/subsystems/vmd/vmd.o 00:03:42.355 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.355 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.355 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.355 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:42.355 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.355 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.355 CC module/event/subsystems/sock/sock.o 00:03:42.355 LIB libspdk_event_sock.a 00:03:42.355 LIB libspdk_event_vhost_blk.a 00:03:42.355 LIB libspdk_event_scheduler.a 00:03:42.355 LIB libspdk_event_vmd.a 00:03:42.355 LIB libspdk_event_vfu_tgt.a 00:03:42.355 SO libspdk_event_sock.so.4.0 00:03:42.355 LIB libspdk_event_iobuf.a 00:03:42.355 SO libspdk_event_scheduler.so.3.0 00:03:42.355 SO libspdk_event_vhost_blk.so.2.0 00:03:42.355 SO libspdk_event_vfu_tgt.so.2.0 00:03:42.355 SO libspdk_event_vmd.so.5.0 00:03:42.614 SO libspdk_event_iobuf.so.2.0 00:03:42.614 SYMLINK libspdk_event_sock.so 00:03:42.614 SYMLINK libspdk_event_vhost_blk.so 00:03:42.614 SYMLINK libspdk_event_scheduler.so 00:03:42.614 SYMLINK libspdk_event_vfu_tgt.so 00:03:42.614 SYMLINK libspdk_event_vmd.so 00:03:42.614 SYMLINK libspdk_event_iobuf.so 00:03:42.614 CC module/event/subsystems/accel/accel.o 00:03:42.873 LIB libspdk_event_accel.a 00:03:42.873 SO libspdk_event_accel.so.5.0 00:03:42.873 SYMLINK libspdk_event_accel.so 00:03:42.873 CC module/event/subsystems/bdev/bdev.o 00:03:43.132 LIB libspdk_event_bdev.a 00:03:43.132 SO libspdk_event_bdev.so.5.0 00:03:43.132 SYMLINK libspdk_event_bdev.so 00:03:43.390 CC module/event/subsystems/ublk/ublk.o 00:03:43.390 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.390 CC module/event/subsystems/scsi/scsi.o 00:03:43.390 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.390 CC module/event/subsystems/nbd/nbd.o 00:03:43.390 LIB libspdk_event_ublk.a 00:03:43.390 LIB libspdk_event_nbd.a 00:03:43.390 LIB libspdk_event_scsi.a 00:03:43.390 SO libspdk_event_nbd.so.5.0 00:03:43.390 SO libspdk_event_ublk.so.2.0 00:03:43.648 SO libspdk_event_scsi.so.5.0 00:03:43.648 SYMLINK libspdk_event_nbd.so 00:03:43.648 SYMLINK libspdk_event_ublk.so 00:03:43.648 SYMLINK libspdk_event_scsi.so 00:03:43.648 LIB libspdk_event_nvmf.a 00:03:43.648 SO libspdk_event_nvmf.so.5.0 00:03:43.648 SYMLINK libspdk_event_nvmf.so 00:03:43.648 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.648 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.906 LIB libspdk_event_iscsi.a 00:03:43.906 LIB libspdk_event_vhost_scsi.a 00:03:43.906 SO libspdk_event_iscsi.so.5.0 00:03:43.906 SO libspdk_event_vhost_scsi.so.2.0 00:03:43.906 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.906 SYMLINK libspdk_event_iscsi.so 
00:03:43.906 SO libspdk.so.5.0 00:03:43.906 SYMLINK libspdk.so 00:03:44.172 CC app/trace_record/trace_record.o 00:03:44.172 CC app/spdk_nvme_perf/perf.o 00:03:44.172 CC app/spdk_top/spdk_top.o 00:03:44.172 CC app/spdk_lspci/spdk_lspci.o 00:03:44.172 CXX app/trace/trace.o 00:03:44.172 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.172 CC test/rpc_client/rpc_client_test.o 00:03:44.172 TEST_HEADER include/spdk/accel.h 00:03:44.172 CC app/spdk_nvme_identify/identify.o 00:03:44.172 TEST_HEADER include/spdk/accel_module.h 00:03:44.172 TEST_HEADER include/spdk/assert.h 00:03:44.172 TEST_HEADER include/spdk/barrier.h 00:03:44.172 TEST_HEADER include/spdk/base64.h 00:03:44.172 TEST_HEADER include/spdk/bdev.h 00:03:44.172 TEST_HEADER include/spdk/bdev_module.h 00:03:44.172 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.172 TEST_HEADER include/spdk/bit_array.h 00:03:44.172 TEST_HEADER include/spdk/bit_pool.h 00:03:44.172 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.172 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.172 TEST_HEADER include/spdk/blobfs.h 00:03:44.172 TEST_HEADER include/spdk/blob.h 00:03:44.172 TEST_HEADER include/spdk/conf.h 00:03:44.172 TEST_HEADER include/spdk/config.h 00:03:44.172 TEST_HEADER include/spdk/cpuset.h 00:03:44.172 TEST_HEADER include/spdk/crc16.h 00:03:44.172 TEST_HEADER include/spdk/crc32.h 00:03:44.172 TEST_HEADER include/spdk/crc64.h 00:03:44.172 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.172 TEST_HEADER include/spdk/dif.h 00:03:44.172 CC app/spdk_dd/spdk_dd.o 00:03:44.172 TEST_HEADER include/spdk/dma.h 00:03:44.172 TEST_HEADER include/spdk/endian.h 00:03:44.172 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.172 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.172 TEST_HEADER include/spdk/env.h 00:03:44.172 CC app/nvmf_tgt/nvmf_main.o 00:03:44.172 TEST_HEADER include/spdk/event.h 00:03:44.172 TEST_HEADER include/spdk/fd_group.h 00:03:44.172 CC examples/ioat/verify/verify.o 00:03:44.172 TEST_HEADER include/spdk/fd.h 00:03:44.172 CC test/event/reactor/reactor.o 00:03:44.172 CC examples/ioat/perf/perf.o 00:03:44.172 CC app/vhost/vhost.o 00:03:44.172 CC test/event/event_perf/event_perf.o 00:03:44.172 CC test/nvme/aer/aer.o 00:03:44.172 TEST_HEADER include/spdk/file.h 00:03:44.172 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.172 CC examples/nvme/hello_world/hello_world.o 00:03:44.172 CC test/app/histogram_perf/histogram_perf.o 00:03:44.172 TEST_HEADER include/spdk/ftl.h 00:03:44.173 CC examples/accel/perf/accel_perf.o 00:03:44.173 CC test/thread/poller_perf/poller_perf.o 00:03:44.173 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.173 CC test/event/reactor_perf/reactor_perf.o 00:03:44.173 CC test/app/jsoncat/jsoncat.o 00:03:44.173 CC app/fio/nvme/fio_plugin.o 00:03:44.173 CC examples/idxd/perf/perf.o 00:03:44.173 TEST_HEADER include/spdk/hexlify.h 00:03:44.173 CC test/env/vtophys/vtophys.o 00:03:44.173 CC examples/sock/hello_world/hello_sock.o 00:03:44.173 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.173 CC examples/nvme/reconnect/reconnect.o 00:03:44.173 CC examples/util/zipf/zipf.o 00:03:44.173 TEST_HEADER include/spdk/histogram_data.h 00:03:44.173 TEST_HEADER include/spdk/idxd.h 00:03:44.437 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.437 TEST_HEADER include/spdk/init.h 00:03:44.437 TEST_HEADER include/spdk/ioat.h 00:03:44.437 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.437 CC app/spdk_tgt/spdk_tgt.o 00:03:44.437 TEST_HEADER include/spdk/iscsi_spec.h 00:03:44.437 TEST_HEADER include/spdk/json.h 00:03:44.437 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.437 
TEST_HEADER include/spdk/likely.h 00:03:44.437 TEST_HEADER include/spdk/log.h 00:03:44.438 TEST_HEADER include/spdk/lvol.h 00:03:44.438 TEST_HEADER include/spdk/memory.h 00:03:44.438 TEST_HEADER include/spdk/mmio.h 00:03:44.438 CC test/app/bdev_svc/bdev_svc.o 00:03:44.438 TEST_HEADER include/spdk/nbd.h 00:03:44.438 CC test/accel/dif/dif.o 00:03:44.438 CC examples/bdev/bdevperf/bdevperf.o 00:03:44.438 CC examples/blob/hello_world/hello_blob.o 00:03:44.438 TEST_HEADER include/spdk/notify.h 00:03:44.438 CC test/bdev/bdevio/bdevio.o 00:03:44.438 TEST_HEADER include/spdk/nvme.h 00:03:44.438 CC examples/thread/thread/thread_ex.o 00:03:44.438 CC test/blobfs/mkfs/mkfs.o 00:03:44.438 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.438 CC examples/bdev/hello_world/hello_bdev.o 00:03:44.438 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.438 CC app/fio/bdev/fio_plugin.o 00:03:44.438 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.438 CC test/dma/test_dma/test_dma.o 00:03:44.438 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.438 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.438 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.438 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.438 CC examples/nvmf/nvmf/nvmf.o 00:03:44.438 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.438 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.438 CC test/lvol/esnap/esnap.o 00:03:44.438 TEST_HEADER include/spdk/nvmf.h 00:03:44.438 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.438 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.438 TEST_HEADER include/spdk/opal.h 00:03:44.438 TEST_HEADER include/spdk/opal_spec.h 00:03:44.438 TEST_HEADER include/spdk/pci_ids.h 00:03:44.438 TEST_HEADER include/spdk/pipe.h 00:03:44.438 TEST_HEADER include/spdk/queue.h 00:03:44.438 TEST_HEADER include/spdk/reduce.h 00:03:44.438 TEST_HEADER include/spdk/rpc.h 00:03:44.438 TEST_HEADER include/spdk/scheduler.h 00:03:44.438 TEST_HEADER include/spdk/scsi.h 00:03:44.438 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.438 TEST_HEADER include/spdk/sock.h 00:03:44.438 TEST_HEADER include/spdk/stdinc.h 00:03:44.438 TEST_HEADER include/spdk/string.h 00:03:44.438 TEST_HEADER include/spdk/thread.h 00:03:44.438 TEST_HEADER include/spdk/trace.h 00:03:44.438 TEST_HEADER include/spdk/trace_parser.h 00:03:44.438 TEST_HEADER include/spdk/tree.h 00:03:44.438 TEST_HEADER include/spdk/ublk.h 00:03:44.438 LINK spdk_lspci 00:03:44.438 TEST_HEADER include/spdk/util.h 00:03:44.438 TEST_HEADER include/spdk/uuid.h 00:03:44.438 TEST_HEADER include/spdk/version.h 00:03:44.438 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.438 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.438 TEST_HEADER include/spdk/vhost.h 00:03:44.438 TEST_HEADER include/spdk/vmd.h 00:03:44.438 TEST_HEADER include/spdk/xor.h 00:03:44.438 TEST_HEADER include/spdk/zipf.h 00:03:44.438 CXX test/cpp_headers/accel.o 00:03:44.438 LINK rpc_client_test 00:03:44.705 LINK reactor 00:03:44.705 LINK lsvmd 00:03:44.705 LINK event_perf 00:03:44.705 LINK spdk_nvme_discover 00:03:44.705 LINK histogram_perf 00:03:44.705 LINK reactor_perf 00:03:44.705 LINK jsoncat 00:03:44.705 LINK poller_perf 00:03:44.705 LINK env_dpdk_post_init 00:03:44.705 LINK vtophys 00:03:44.705 LINK interrupt_tgt 00:03:44.705 LINK zipf 00:03:44.705 LINK nvmf_tgt 00:03:44.705 LINK spdk_trace_record 00:03:44.705 LINK vhost 00:03:44.705 LINK iscsi_tgt 00:03:44.705 LINK ioat_perf 00:03:44.705 LINK bdev_svc 00:03:44.705 LINK verify 00:03:44.705 LINK hello_world 00:03:44.705 LINK spdk_tgt 00:03:44.705 LINK mkfs 00:03:44.705 LINK mem_callbacks 
00:03:44.705 LINK hello_sock 00:03:44.705 LINK aer 00:03:44.966 LINK hello_blob 00:03:44.966 LINK hello_bdev 00:03:44.966 LINK thread 00:03:44.966 CXX test/cpp_headers/accel_module.o 00:03:44.966 CXX test/cpp_headers/assert.o 00:03:44.966 CXX test/cpp_headers/barrier.o 00:03:44.966 CXX test/cpp_headers/base64.o 00:03:44.966 LINK spdk_dd 00:03:44.966 LINK nvmf 00:03:44.966 CC test/event/app_repeat/app_repeat.o 00:03:44.966 CC test/env/memory/memory_ut.o 00:03:44.966 LINK reconnect 00:03:44.966 CC test/app/stub/stub.o 00:03:44.966 LINK idxd_perf 00:03:44.966 CC test/env/pci/pci_ut.o 00:03:44.966 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.966 LINK spdk_trace 00:03:44.966 CXX test/cpp_headers/bdev.o 00:03:44.966 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:44.966 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.966 CC examples/vmd/led/led.o 00:03:44.966 CC test/event/scheduler/scheduler.o 00:03:44.966 CC examples/blob/cli/blobcli.o 00:03:44.966 LINK dif 00:03:44.966 CC examples/nvme/arbitration/arbitration.o 00:03:44.966 LINK test_dma 00:03:44.966 CXX test/cpp_headers/bdev_module.o 00:03:44.966 CC examples/nvme/hotplug/hotplug.o 00:03:44.966 LINK bdevio 00:03:45.231 CXX test/cpp_headers/bdev_zone.o 00:03:45.231 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.231 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.231 CC examples/nvme/abort/abort.o 00:03:45.231 CXX test/cpp_headers/bit_array.o 00:03:45.231 LINK accel_perf 00:03:45.231 LINK nvme_fuzz 00:03:45.231 CC test/nvme/reset/reset.o 00:03:45.231 CC test/nvme/sgl/sgl.o 00:03:45.231 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.231 CXX test/cpp_headers/bit_pool.o 00:03:45.231 CXX test/cpp_headers/blob_bdev.o 00:03:45.231 LINK spdk_nvme 00:03:45.231 CXX test/cpp_headers/blobfs_bdev.o 00:03:45.231 CC test/nvme/e2edp/nvme_dp.o 00:03:45.231 CXX test/cpp_headers/blobfs.o 00:03:45.231 LINK app_repeat 00:03:45.231 CXX test/cpp_headers/blob.o 00:03:45.231 CC test/nvme/overhead/overhead.o 00:03:45.231 LINK spdk_bdev 00:03:45.231 LINK stub 00:03:45.231 CXX test/cpp_headers/conf.o 00:03:45.231 CXX test/cpp_headers/config.o 00:03:45.231 CC test/nvme/err_injection/err_injection.o 00:03:45.494 CC test/nvme/reserve/reserve.o 00:03:45.494 CC test/nvme/startup/startup.o 00:03:45.494 LINK led 00:03:45.494 CC test/nvme/simple_copy/simple_copy.o 00:03:45.494 CC test/nvme/connect_stress/connect_stress.o 00:03:45.494 CXX test/cpp_headers/cpuset.o 00:03:45.494 CC test/nvme/boot_partition/boot_partition.o 00:03:45.494 CXX test/cpp_headers/crc16.o 00:03:45.494 LINK scheduler 00:03:45.494 CC test/nvme/compliance/nvme_compliance.o 00:03:45.494 CXX test/cpp_headers/crc32.o 00:03:45.494 CXX test/cpp_headers/crc64.o 00:03:45.494 CXX test/cpp_headers/dif.o 00:03:45.494 CXX test/cpp_headers/dma.o 00:03:45.494 LINK cmb_copy 00:03:45.494 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.494 CXX test/cpp_headers/endian.o 00:03:45.494 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.494 LINK hotplug 00:03:45.494 CXX test/cpp_headers/env_dpdk.o 00:03:45.494 CC test/nvme/fdp/fdp.o 00:03:45.494 LINK pmr_persistence 00:03:45.494 CC test/nvme/cuse/cuse.o 00:03:45.760 CXX test/cpp_headers/env.o 00:03:45.760 CXX test/cpp_headers/event.o 00:03:45.760 CXX test/cpp_headers/fd_group.o 00:03:45.760 CXX test/cpp_headers/fd.o 00:03:45.760 CXX test/cpp_headers/file.o 00:03:45.760 CXX test/cpp_headers/ftl.o 00:03:45.760 LINK arbitration 00:03:45.760 CXX test/cpp_headers/gpt_spec.o 00:03:45.760 CXX test/cpp_headers/hexlify.o 00:03:45.760 LINK spdk_nvme_perf 
00:03:45.760 LINK pci_ut 00:03:45.760 LINK err_injection 00:03:45.760 LINK reset 00:03:45.760 LINK startup 00:03:45.760 LINK spdk_nvme_identify 00:03:45.760 LINK bdevperf 00:03:45.760 LINK connect_stress 00:03:45.760 LINK sgl 00:03:45.760 LINK reserve 00:03:45.760 CXX test/cpp_headers/histogram_data.o 00:03:45.760 LINK nvme_dp 00:03:45.760 LINK boot_partition 00:03:45.760 CXX test/cpp_headers/idxd.o 00:03:45.760 LINK spdk_top 00:03:45.760 CXX test/cpp_headers/idxd_spec.o 00:03:45.760 LINK overhead 00:03:45.760 CXX test/cpp_headers/init.o 00:03:45.760 CXX test/cpp_headers/ioat.o 00:03:45.760 LINK abort 00:03:45.760 CXX test/cpp_headers/ioat_spec.o 00:03:45.760 LINK simple_copy 00:03:45.760 CXX test/cpp_headers/iscsi_spec.o 00:03:45.760 CXX test/cpp_headers/json.o 00:03:46.022 LINK memory_ut 00:03:46.022 CXX test/cpp_headers/jsonrpc.o 00:03:46.022 LINK vhost_fuzz 00:03:46.022 CXX test/cpp_headers/likely.o 00:03:46.022 LINK nvme_manage 00:03:46.022 CXX test/cpp_headers/log.o 00:03:46.022 LINK doorbell_aers 00:03:46.022 CXX test/cpp_headers/lvol.o 00:03:46.022 LINK fused_ordering 00:03:46.022 CXX test/cpp_headers/memory.o 00:03:46.022 LINK blobcli 00:03:46.022 CXX test/cpp_headers/mmio.o 00:03:46.022 CXX test/cpp_headers/nbd.o 00:03:46.022 CXX test/cpp_headers/nvme.o 00:03:46.022 CXX test/cpp_headers/notify.o 00:03:46.022 CXX test/cpp_headers/nvme_intel.o 00:03:46.022 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.022 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.022 CXX test/cpp_headers/nvme_spec.o 00:03:46.022 CXX test/cpp_headers/nvme_zns.o 00:03:46.022 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.022 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.022 CXX test/cpp_headers/nvmf.o 00:03:46.022 CXX test/cpp_headers/nvmf_spec.o 00:03:46.022 CXX test/cpp_headers/nvmf_transport.o 00:03:46.022 CXX test/cpp_headers/opal.o 00:03:46.022 CXX test/cpp_headers/opal_spec.o 00:03:46.022 CXX test/cpp_headers/pci_ids.o 00:03:46.022 LINK nvme_compliance 00:03:46.022 CXX test/cpp_headers/pipe.o 00:03:46.284 CXX test/cpp_headers/queue.o 00:03:46.284 CXX test/cpp_headers/reduce.o 00:03:46.284 CXX test/cpp_headers/rpc.o 00:03:46.284 CXX test/cpp_headers/scheduler.o 00:03:46.284 CXX test/cpp_headers/scsi.o 00:03:46.284 CXX test/cpp_headers/scsi_spec.o 00:03:46.284 CXX test/cpp_headers/sock.o 00:03:46.284 CXX test/cpp_headers/stdinc.o 00:03:46.284 CXX test/cpp_headers/string.o 00:03:46.284 CXX test/cpp_headers/thread.o 00:03:46.284 CXX test/cpp_headers/trace.o 00:03:46.284 CXX test/cpp_headers/trace_parser.o 00:03:46.284 CXX test/cpp_headers/tree.o 00:03:46.284 LINK fdp 00:03:46.284 CXX test/cpp_headers/ublk.o 00:03:46.284 CXX test/cpp_headers/util.o 00:03:46.284 CXX test/cpp_headers/uuid.o 00:03:46.284 CXX test/cpp_headers/version.o 00:03:46.284 CXX test/cpp_headers/vfio_user_pci.o 00:03:46.284 CXX test/cpp_headers/vfio_user_spec.o 00:03:46.284 CXX test/cpp_headers/vhost.o 00:03:46.284 CXX test/cpp_headers/vmd.o 00:03:46.284 CXX test/cpp_headers/xor.o 00:03:46.284 CXX test/cpp_headers/zipf.o 00:03:47.218 LINK cuse 00:03:47.218 LINK iscsi_fuzz 00:03:49.753 LINK esnap 00:03:50.012 00:03:50.012 real 0m37.896s 00:03:50.012 user 7m14.363s 00:03:50.012 sys 1m36.955s 00:03:50.012 00:43:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:50.012 00:43:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:50.012 ************************************ 00:03:50.012 END TEST make 00:03:50.012 ************************************ 00:03:50.012 00:43:34 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.012 00:43:34 -- nvmf/common.sh@7 -- # uname -s 00:03:50.012 00:43:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.012 00:43:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.012 00:43:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.012 00:43:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.012 00:43:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.012 00:43:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.012 00:43:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.012 00:43:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.012 00:43:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.012 00:43:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.012 00:43:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:50.012 00:43:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:50.012 00:43:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.012 00:43:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.012 00:43:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:50.012 00:43:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.012 00:43:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.012 00:43:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.012 00:43:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.012 00:43:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.012 00:43:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.012 00:43:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.012 00:43:34 -- paths/export.sh@5 -- # export PATH 00:03:50.012 00:43:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.012 00:43:34 -- nvmf/common.sh@46 -- # : 0 00:03:50.012 00:43:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:50.012 00:43:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:50.012 00:43:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:50.012 00:43:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.013 00:43:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.013 00:43:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:50.013 00:43:34 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:03:50.013 00:43:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:50.013 00:43:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.013 00:43:34 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.013 00:43:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:50.013 00:43:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:50.013 00:43:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.013 00:43:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:50.013 00:43:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.013 00:43:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.013 00:43:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.013 00:43:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:50.013 00:43:34 -- spdk/autotest.sh@48 -- # udevadm_pid=3245952 00:03:50.013 00:43:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:50.013 00:43:34 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:50.013 00:43:34 -- spdk/autotest.sh@54 -- # echo 3245954 00:03:50.013 00:43:34 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:50.013 00:43:34 -- spdk/autotest.sh@56 -- # echo 3245955 00:03:50.013 00:43:34 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:50.013 00:43:34 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:50.013 00:43:34 -- spdk/autotest.sh@60 -- # echo 3245956 00:03:50.013 00:43:34 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:50.013 00:43:34 -- spdk/autotest.sh@62 -- # echo 3245957 00:03:50.013 00:43:34 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:50.013 00:43:34 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.013 00:43:34 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:50.013 00:43:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:50.013 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:03:50.013 00:43:34 -- spdk/autotest.sh@70 -- # create_test_list 00:03:50.013 00:43:34 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:50.013 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:03:50.013 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:50.013 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:50.013 00:43:34 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:50.013 00:43:34 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.013 00:43:34 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.013 00:43:34 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:50.013 00:43:34 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.013 00:43:34 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:50.013 00:43:34 -- common/autotest_common.sh@1440 -- # uname 00:03:50.013 00:43:34 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:50.013 00:43:34 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:50.013 00:43:34 -- common/autotest_common.sh@1460 -- # uname 00:03:50.013 00:43:34 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:50.013 00:43:34 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:50.013 00:43:34 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:50.013 00:43:34 -- spdk/autotest.sh@83 -- # hash lcov 00:03:50.013 00:43:34 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:50.013 00:43:34 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:50.013 --rc lcov_branch_coverage=1 00:03:50.013 --rc lcov_function_coverage=1 00:03:50.013 --rc genhtml_branch_coverage=1 00:03:50.013 --rc genhtml_function_coverage=1 00:03:50.013 --rc genhtml_legend=1 00:03:50.013 --rc geninfo_all_blocks=1 00:03:50.013 ' 00:03:50.013 00:43:34 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:50.013 --rc lcov_branch_coverage=1 00:03:50.013 --rc lcov_function_coverage=1 00:03:50.013 --rc genhtml_branch_coverage=1 00:03:50.013 --rc genhtml_function_coverage=1 00:03:50.013 --rc genhtml_legend=1 00:03:50.013 --rc geninfo_all_blocks=1 00:03:50.013 ' 00:03:50.013 00:43:34 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:50.013 --rc lcov_branch_coverage=1 00:03:50.013 --rc lcov_function_coverage=1 00:03:50.013 --rc genhtml_branch_coverage=1 00:03:50.013 --rc genhtml_function_coverage=1 00:03:50.013 --rc genhtml_legend=1 00:03:50.013 
--rc geninfo_all_blocks=1 00:03:50.013 --no-external' 00:03:50.013 00:43:34 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:50.013 --rc lcov_branch_coverage=1 00:03:50.013 --rc lcov_function_coverage=1 00:03:50.013 --rc genhtml_branch_coverage=1 00:03:50.013 --rc genhtml_function_coverage=1 00:03:50.013 --rc genhtml_legend=1 00:03:50.013 --rc geninfo_all_blocks=1 00:03:50.013 --no-external' 00:03:50.013 00:43:34 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:50.013 lcov: LCOV version 1.14 00:03:50.013 00:43:34 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:53.297 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:53.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:53.297 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:53.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:53.297 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:19.836 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no 
functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:19.836 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:19.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 
00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:19.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:19.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:21.774 00:44:05 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:21.774 00:44:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.774 00:44:05 -- common/autotest_common.sh@10 -- # set +x 00:04:21.774 00:44:05 -- spdk/autotest.sh@102 -- # rm -f 00:04:21.774 00:44:05 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.709 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:22.709 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:22.709 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:22.709 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:22.709 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:22.709 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:22.709 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:04:22.968 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:22.968 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:22.968 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:22.968 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:22.968 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:22.968 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:22.968 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:22.968 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:22.968 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:22.968 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:22.968 00:44:07 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:22.968 00:44:07 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:22.968 00:44:07 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:22.968 00:44:07 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:22.968 00:44:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.968 00:44:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:22.968 00:44:07 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:22.968 00:44:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.968 00:44:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.968 00:44:07 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:22.968 00:44:07 -- spdk/autotest.sh@121 -- # grep -v p 00:04:22.968 00:44:07 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:22.968 00:44:07 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:22.968 00:44:07 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:22.968 00:44:07 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:22.968 00:44:07 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:22.968 00:44:07 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.226 No valid GPT data, bailing 00:04:23.226 00:44:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.226 00:44:07 -- scripts/common.sh@393 -- # pt= 00:04:23.226 00:44:07 -- scripts/common.sh@394 -- # return 1 00:04:23.226 00:44:07 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.226 1+0 records in 00:04:23.226 1+0 records out 00:04:23.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00186086 s, 563 MB/s 00:04:23.226 00:44:07 -- spdk/autotest.sh@129 -- # sync 00:04:23.226 00:44:07 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.226 00:44:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.226 00:44:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.128 00:44:09 -- spdk/autotest.sh@135 -- # uname -s 00:04:25.128 00:44:09 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:25.128 00:44:09 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:25.128 00:44:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.128 00:44:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.128 00:44:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 ************************************ 00:04:25.128 START TEST setup.sh 00:04:25.128 ************************************ 
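The pre-cleanup trace above checks each NVMe namespace for zoned mode by reading /sys/block/<dev>/queue/zoned, probes the device for a GPT label with spdk-gpt.py and, finding none ("No valid GPT data, bailing"), wipes the first MiB with dd before syncing. A minimal bash sketch of the same zoned-device check, assuming only the standard sysfs layout (the helper name is illustrative, not part of the autotest scripts):

    # Illustrative helper, not autotest code: report whether each NVMe namespace
    # is zoned, using the same sysfs attribute the trace above reads.
    is_block_zoned() {
        local dev=$1
        # "none" means a conventional (non-zoned) device; anything else is zoned
        [[ -e /sys/block/$dev/queue/zoned ]] || return 1
        [[ $(cat /sys/block/$dev/queue/zoned) != none ]]
    }

    for nvme in /sys/block/nvme*n*; do
        [[ -e $nvme ]] || continue        # no NVMe namespaces present
        dev=${nvme##*/}
        if is_block_zoned "$dev"; then
            echo "$dev is zoned - skipped by destructive cleanup"
        else
            echo "$dev is conventional"
        fi
    done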
00:04:25.128 00:44:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:25.128 * Looking for test storage... 00:04:25.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.128 00:44:09 -- setup/test-setup.sh@10 -- # uname -s 00:04:25.128 00:44:09 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:25.128 00:44:09 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:25.128 00:44:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.128 00:44:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.128 00:44:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 ************************************ 00:04:25.128 START TEST acl 00:04:25.128 ************************************ 00:04:25.128 00:44:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:25.128 * Looking for test storage... 00:04:25.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.128 00:44:09 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:25.128 00:44:09 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:25.128 00:44:09 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:25.128 00:44:09 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:25.128 00:44:09 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:25.128 00:44:09 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:25.128 00:44:09 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:25.128 00:44:09 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.128 00:44:09 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:25.128 00:44:09 -- setup/acl.sh@12 -- # devs=() 00:04:25.128 00:44:09 -- setup/acl.sh@12 -- # declare -a devs 00:04:25.128 00:44:09 -- setup/acl.sh@13 -- # drivers=() 00:04:25.128 00:44:09 -- setup/acl.sh@13 -- # declare -A drivers 00:04:25.128 00:44:09 -- setup/acl.sh@51 -- # setup reset 00:04:25.128 00:44:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.128 00:44:09 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.032 00:44:10 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:27.032 00:44:10 -- setup/acl.sh@16 -- # local dev driver 00:04:27.032 00:44:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.032 00:44:10 -- setup/acl.sh@15 -- # setup output status 00:04:27.032 00:44:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.032 00:44:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.599 Hugepages 00:04:27.599 node hugesize free / total 00:04:27.599 00:44:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.599 00:44:11 -- setup/acl.sh@19 -- # continue 00:04:27.599 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.599 00:44:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.599 00:44:11 -- setup/acl.sh@19 -- # continue 00:04:27.599 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:04:27.600 Type BDF 
Vendor Device NUMA Driver Device Block devices 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 
00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # continue 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:27.600 00:44:11 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:27.600 00:44:11 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:27.600 00:44:11 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:27.600 00:44:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.600 00:44:11 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:27.600 00:44:11 -- setup/acl.sh@54 -- # run_test denied denied 00:04:27.600 00:44:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.600 00:44:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.600 00:44:11 -- common/autotest_common.sh@10 -- # set +x 00:04:27.600 ************************************ 00:04:27.600 START TEST denied 00:04:27.600 ************************************ 00:04:27.600 00:44:11 -- common/autotest_common.sh@1104 -- # denied 00:04:27.600 00:44:11 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:27.600 00:44:11 -- setup/acl.sh@38 -- # setup output config 00:04:27.600 00:44:11 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:27.600 00:44:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.600 00:44:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.504 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:29.504 00:44:13 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:29.504 00:44:13 -- setup/acl.sh@28 -- # local dev driver 00:04:29.504 00:44:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:29.504 00:44:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:29.504 00:44:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:29.504 00:44:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:29.504 00:44:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:29.504 00:44:13 -- setup/acl.sh@41 -- # setup reset 00:04:29.504 00:44:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.504 00:44:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.406 00:04:31.406 real 0m3.840s 00:04:31.406 user 0m1.082s 00:04:31.406 sys 0m1.856s 00:04:31.406 00:44:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.406 00:44:15 -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.406 ************************************ 00:04:31.406 END TEST denied 00:04:31.406 ************************************ 00:04:31.663 00:44:15 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:31.663 00:44:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.663 00:44:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.663 00:44:15 -- common/autotest_common.sh@10 -- # set +x 00:04:31.663 ************************************ 00:04:31.663 START TEST allowed 00:04:31.663 ************************************ 00:04:31.663 00:44:15 -- common/autotest_common.sh@1104 -- # allowed 00:04:31.663 00:44:15 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:31.663 00:44:15 -- setup/acl.sh@45 -- # setup output config 00:04:31.663 00:44:15 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:31.663 00:44:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.663 00:44:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.196 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.196 00:44:18 -- setup/acl.sh@47 -- # verify 00:04:34.196 00:44:18 -- setup/acl.sh@28 -- # local dev driver 00:04:34.196 00:44:18 -- setup/acl.sh@48 -- # setup reset 00:04:34.196 00:44:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.196 00:44:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.573 00:04:35.573 real 0m4.010s 00:04:35.573 user 0m1.081s 00:04:35.573 sys 0m1.797s 00:04:35.573 00:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.573 00:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.573 ************************************ 00:04:35.573 END TEST allowed 00:04:35.573 ************************************ 00:04:35.573 00:04:35.573 real 0m10.481s 00:04:35.573 user 0m3.185s 00:04:35.573 sys 0m5.342s 00:04:35.573 00:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.573 00:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.573 ************************************ 00:04:35.573 END TEST acl 00:04:35.573 ************************************ 00:04:35.573 00:44:19 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.573 00:44:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.573 00:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.573 00:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.573 ************************************ 00:04:35.573 START TEST hugepages 00:04:35.573 ************************************ 00:04:35.573 00:44:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.573 * Looking for test storage... 
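The hugepages test that starts here spends most of the following trace inside a get_meminfo helper: /proc/meminfo is read into an array and scanned field by field until the requested key is found (first Hugepagesize, later AnonHugePages and HugePages_Surp), and its value is echoed back (2048 for the default hugepage size on this host). A minimal sketch of that kind of lookup, assuming only the standard /proc/meminfo format (the function name is illustrative, not the setup/common.sh implementation):

    # Illustrative lookup, not setup/common.sh: return the value of one
    # /proc/meminfo field, splitting each line on ':' and whitespace just as
    # the traced helper does with IFS=': '.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"   # kB for sizes, a plain count for HugePages_* fields
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 on this host, per the trace that follows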
00:04:35.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:35.573 00:44:19 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:35.573 00:44:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:35.573 00:44:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:35.573 00:44:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:35.573 00:44:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:35.573 00:44:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:35.573 00:44:19 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:35.573 00:44:19 -- setup/common.sh@18 -- # local node= 00:04:35.573 00:44:19 -- setup/common.sh@19 -- # local var val 00:04:35.573 00:44:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.573 00:44:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.573 00:44:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.573 00:44:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.573 00:44:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.573 00:44:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41758068 kB' 'MemAvailable: 45268320 kB' 'Buffers: 2704 kB' 'Cached: 12212000 kB' 'SwapCached: 0 kB' 'Active: 9149588 kB' 'Inactive: 3508456 kB' 'Active(anon): 8754540 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446564 kB' 'Mapped: 216256 kB' 'Shmem: 8311200 kB' 'KReclaimable: 200740 kB' 'Slab: 586740 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 386000 kB' 'KernelStack: 12800 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9910608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196828 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.573 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.573 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # continue 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.574 00:44:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.574 00:44:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.574 00:44:19 -- setup/common.sh@33 -- # echo 2048 00:04:35.574 00:44:19 -- setup/common.sh@33 -- # return 0 00:04:35.574 00:44:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:35.574 00:44:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:35.574 00:44:19 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:35.574 00:44:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:35.574 00:44:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:35.574 00:44:19 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
00:04:35.574 00:44:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:35.574 00:44:19 -- setup/hugepages.sh@207 -- # get_nodes 00:04:35.574 00:44:19 -- setup/hugepages.sh@27 -- # local node 00:04:35.574 00:44:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.574 00:44:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:35.574 00:44:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.574 00:44:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.574 00:44:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.574 00:44:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.574 00:44:19 -- setup/hugepages.sh@208 -- # clear_hp 00:04:35.574 00:44:19 -- setup/hugepages.sh@37 -- # local node hp 00:04:35.574 00:44:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.574 00:44:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.574 00:44:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.574 00:44:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.574 00:44:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.574 00:44:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.574 00:44:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.574 00:44:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.574 00:44:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.574 00:44:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.574 00:44:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:35.574 00:44:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:35.834 00:44:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:35.834 00:44:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.834 00:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.834 00:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.834 ************************************ 00:04:35.834 START TEST default_setup 00:04:35.834 ************************************ 00:04:35.834 00:44:19 -- common/autotest_common.sh@1104 -- # default_setup 00:04:35.834 00:44:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:35.834 00:44:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.834 00:44:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:35.834 00:44:19 -- setup/hugepages.sh@51 -- # shift 00:04:35.834 00:44:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:35.834 00:44:19 -- setup/hugepages.sh@52 -- # local node_ids 00:04:35.834 00:44:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.834 00:44:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.834 00:44:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:35.834 00:44:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:35.834 00:44:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.834 00:44:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.834 00:44:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.834 00:44:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.834 00:44:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.834 00:44:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:35.834 00:44:19 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:35.834 00:44:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:35.834 00:44:19 -- setup/hugepages.sh@73 -- # return 0 00:04:35.834 00:44:19 -- setup/hugepages.sh@137 -- # setup output 00:04:35.834 00:44:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.834 00:44:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.770 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:36.770 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.770 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:36.770 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:36.770 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:36.770 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.029 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.029 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.029 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.029 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:37.970 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.970 00:44:21 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:37.970 00:44:21 -- setup/hugepages.sh@89 -- # local node 00:04:37.970 00:44:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.970 00:44:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.970 00:44:21 -- setup/hugepages.sh@92 -- # local surp 00:04:37.970 00:44:21 -- setup/hugepages.sh@93 -- # local resv 00:04:37.970 00:44:21 -- setup/hugepages.sh@94 -- # local anon 00:04:37.970 00:44:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.970 00:44:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.970 00:44:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.970 00:44:21 -- setup/common.sh@18 -- # local node= 00:04:37.970 00:44:21 -- setup/common.sh@19 -- # local var val 00:04:37.970 00:44:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.970 00:44:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.970 00:44:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.970 00:44:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.970 00:44:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.970 00:44:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43878260 kB' 'MemAvailable: 47388616 kB' 'Buffers: 2704 kB' 'Cached: 12212092 kB' 'SwapCached: 0 kB' 'Active: 9165108 kB' 'Inactive: 3508456 kB' 'Active(anon): 8770060 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462036 kB' 'Mapped: 216408 kB' 'Shmem: 8311292 kB' 'KReclaimable: 200948 kB' 'Slab: 586512 kB' 'SReclaimable: 200948 kB' 'SUnreclaim: 385564 kB' 'KernelStack: 12640 kB' 'PageTables: 7904 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9925880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:21 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 
-- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.970 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.970 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.971 00:44:22 -- setup/common.sh@33 -- # echo 0 00:04:37.971 00:44:22 -- setup/common.sh@33 -- # return 0 00:04:37.971 00:44:22 -- setup/hugepages.sh@97 -- # anon=0 00:04:37.971 00:44:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.971 00:44:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.971 00:44:22 -- setup/common.sh@18 -- # local node= 00:04:37.971 00:44:22 -- setup/common.sh@19 -- # local var val 00:04:37.971 00:44:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.971 00:44:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.971 00:44:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.971 00:44:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.971 00:44:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.971 00:44:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43882372 kB' 'MemAvailable: 47392728 kB' 'Buffers: 2704 kB' 'Cached: 12212096 kB' 'SwapCached: 0 kB' 'Active: 9167760 kB' 'Inactive: 3508456 kB' 'Active(anon): 8772712 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464792 kB' 'Mapped: 216808 kB' 'Shmem: 8311296 kB' 'KReclaimable: 200948 kB' 'Slab: 586472 kB' 'SReclaimable: 200948 kB' 'SUnreclaim: 385524 kB' 'KernelStack: 12736 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9927356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197020 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 
'DirectMap1G: 46137344 kB' 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.971 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.971 00:44:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 
00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 
-- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.972 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.972 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.973 00:44:22 -- setup/common.sh@33 -- # echo 0 00:04:37.973 00:44:22 -- setup/common.sh@33 -- # return 0 00:04:37.973 00:44:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:37.973 00:44:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.973 00:44:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.973 00:44:22 -- setup/common.sh@18 -- # local node= 00:04:37.973 00:44:22 -- setup/common.sh@19 -- # local var val 00:04:37.973 00:44:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.973 00:44:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.973 00:44:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.973 00:44:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.973 00:44:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.973 00:44:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43883264 kB' 'MemAvailable: 47393600 kB' 'Buffers: 2704 kB' 'Cached: 12212096 kB' 'SwapCached: 0 kB' 'Active: 9161056 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766008 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458048 kB' 'Mapped: 216292 kB' 'Shmem: 8311296 kB' 'KReclaimable: 200908 kB' 'Slab: 586440 kB' 'SReclaimable: 200908 kB' 'SUnreclaim: 385532 kB' 'KernelStack: 12736 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.973 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.973 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 
00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.974 00:44:22 -- setup/common.sh@33 -- # echo 0 00:04:37.974 00:44:22 -- setup/common.sh@33 -- # return 0 00:04:37.974 00:44:22 -- setup/hugepages.sh@100 -- # resv=0 00:04:37.974 00:44:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.974 nr_hugepages=1024 00:04:37.974 00:44:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.974 resv_hugepages=0 00:04:37.974 00:44:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.974 surplus_hugepages=0 00:04:37.974 00:44:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.974 anon_hugepages=0 00:04:37.974 00:44:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.974 00:44:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.974 00:44:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.974 00:44:22 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:37.974 00:44:22 -- setup/common.sh@18 -- # local node= 00:04:37.974 00:44:22 -- setup/common.sh@19 -- # local var val 00:04:37.974 00:44:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.974 00:44:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.974 00:44:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.974 00:44:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.974 00:44:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.974 00:44:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43883012 kB' 'MemAvailable: 47393348 kB' 'Buffers: 2704 kB' 'Cached: 12212124 kB' 'SwapCached: 0 kB' 'Active: 9161020 kB' 'Inactive: 3508456 kB' 'Active(anon): 8765972 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458040 kB' 'Mapped: 215876 kB' 'Shmem: 8311324 kB' 'KReclaimable: 200908 kB' 'Slab: 586440 kB' 'SReclaimable: 200908 kB' 'SUnreclaim: 385532 kB' 'KernelStack: 12720 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.974 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.974 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 
00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 
00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.975 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.975 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 
00:44:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.976 00:44:22 -- setup/common.sh@33 -- # echo 1024 00:04:37.976 00:44:22 -- setup/common.sh@33 -- # return 0 00:04:37.976 00:44:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.976 00:44:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.976 00:44:22 -- setup/hugepages.sh@27 -- # local node 00:04:37.976 00:44:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.976 00:44:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.976 00:44:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.976 00:44:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:37.976 00:44:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.976 00:44:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.976 00:44:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.976 00:44:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.976 00:44:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.976 00:44:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.976 00:44:22 -- setup/common.sh@18 -- # local node=0 00:04:37.976 00:44:22 -- setup/common.sh@19 -- # local var val 00:04:37.976 00:44:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.976 00:44:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.976 00:44:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.976 00:44:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.976 00:44:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.976 00:44:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25722948 kB' 'MemUsed: 7106936 kB' 'SwapCached: 0 kB' 'Active: 
3686436 kB' 'Inactive: 155168 kB' 'Active(anon): 3525144 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651308 kB' 'Mapped: 112468 kB' 'AnonPages: 193500 kB' 'Shmem: 3334848 kB' 'KernelStack: 6728 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101936 kB' 'Slab: 331520 kB' 'SReclaimable: 101936 kB' 'SUnreclaim: 229584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.976 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.976 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # continue 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.977 00:44:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.977 00:44:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.977 00:44:22 -- setup/common.sh@33 -- # echo 0 00:04:37.977 00:44:22 -- setup/common.sh@33 -- # return 0 00:04:37.977 00:44:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.977 00:44:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.977 00:44:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.977 00:44:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.977 00:44:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:37.977 node0=1024 expecting 1024 00:04:37.977 00:44:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:37.977 00:04:37.977 real 0m2.340s 00:04:37.977 user 0m0.660s 00:04:37.977 sys 0m0.813s 00:04:37.977 00:44:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.977 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:37.977 ************************************ 00:04:37.977 END TEST default_setup 00:04:37.977 ************************************ 00:04:37.977 00:44:22 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:37.977 00:44:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.977 00:44:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.977 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:37.977 ************************************ 00:04:37.977 START TEST per_node_1G_alloc 00:04:37.977 ************************************ 00:04:37.977 00:44:22 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:37.977 00:44:22 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:37.977 00:44:22 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:37.977 00:44:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:37.977 00:44:22 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:37.977 00:44:22 -- setup/hugepages.sh@51 -- # shift 00:04:37.977 00:44:22 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:37.977 00:44:22 -- setup/hugepages.sh@52 -- # local node_ids 00:04:37.977 00:44:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.977 00:44:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:37.977 00:44:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:37.977 00:44:22 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:37.977 00:44:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.977 00:44:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:37.977 00:44:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.977 00:44:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.977 00:44:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.977 00:44:22 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:37.977 00:44:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.977 00:44:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:37.977 00:44:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.977 00:44:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:37.977 00:44:22 -- setup/hugepages.sh@73 -- # return 0 00:04:37.977 00:44:22 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:37.977 
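The xtrace above records test/setup/common.sh's get_meminfo helper walking /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a NUMA node is passed) key by key, and hugepages.sh then checking that HugePages_Total matches the expected count once surplus and reserved pages are added in (the "node0=1024 expecting 1024" line). What follows is a minimal, self-contained sketch of that pattern, reconstructed from the trace rather than taken from the SPDK repository; get_meminfo_sketch and the surrounding variable names are illustrative only.

#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace above (not the SPDK script).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node meminfo file is used when a NUMA node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

# The "nr_hugepages=1024 ... node0=1024 expecting 1024" lines above amount to
# this consistency check (1024 is the value configured on this runner):
expected=1024
total=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
node0=$(get_meminfo_sketch HugePages_Total 0)
(( total == expected + surp + resv )) && echo "node0=$node0 expecting $expected"

Reading the whole file with mapfile and stripping the "Node <n> " prefix up front lets the same IFS=': ' read loop serve both the system-wide and the per-node meminfo formats, which is why the trace shows the identical key-by-key scan for every get_meminfo call.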
00:44:22 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:37.977 00:44:22 -- setup/hugepages.sh@146 -- # setup output 00:04:37.977 00:44:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.977 00:44:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.356 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.356 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.356 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.356 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.356 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.356 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.356 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.356 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.356 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.356 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.356 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.356 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.356 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.356 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.356 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.356 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.356 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.356 00:44:23 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:39.356 00:44:23 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:39.356 00:44:23 -- setup/hugepages.sh@89 -- # local node 00:04:39.356 00:44:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.356 00:44:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.356 00:44:23 -- setup/hugepages.sh@92 -- # local surp 00:04:39.356 00:44:23 -- setup/hugepages.sh@93 -- # local resv 00:04:39.356 00:44:23 -- setup/hugepages.sh@94 -- # local anon 00:04:39.356 00:44:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.356 00:44:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.356 00:44:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.356 00:44:23 -- setup/common.sh@18 -- # local node= 00:04:39.356 00:44:23 -- setup/common.sh@19 -- # local var val 00:04:39.356 00:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.356 00:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.356 00:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.356 00:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.356 00:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.356 00:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.356 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 00:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43898128 kB' 'MemAvailable: 47408460 kB' 'Buffers: 2704 kB' 'Cached: 12212176 kB' 'SwapCached: 0 kB' 'Active: 9162056 kB' 'Inactive: 3508456 kB' 'Active(anon): 8767008 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458900 kB' 'Mapped: 215904 kB' 
'Shmem: 8311376 kB' 'KReclaimable: 200900 kB' 'Slab: 586280 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385380 kB' 'KernelStack: 12768 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:39.356 00:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.356 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.356 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 00:44:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': 
' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- 
setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 00:44:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.357 00:44:23 -- setup/common.sh@33 -- # echo 0 00:04:39.357 00:44:23 -- setup/common.sh@33 -- # return 0 00:04:39.357 00:44:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:39.357 00:44:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.357 00:44:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.357 00:44:23 -- setup/common.sh@18 -- # local node= 00:04:39.357 00:44:23 -- setup/common.sh@19 -- # local var val 00:04:39.357 00:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.357 00:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.357 00:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.357 00:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.358 00:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.358 00:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43898236 kB' 'MemAvailable: 47408568 kB' 'Buffers: 2704 kB' 'Cached: 12212176 kB' 'SwapCached: 0 kB' 'Active: 9162592 kB' 'Inactive: 3508456 kB' 'Active(anon): 8767544 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459448 kB' 'Mapped: 215904 kB' 'Shmem: 8311376 kB' 'KReclaimable: 200900 kB' 'Slab: 586292 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385392 kB' 'KernelStack: 12736 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 
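The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" in this part of the trace are setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time with IFS=': ' until it reaches the requested key, then echoing that key's value (0 for HugePages_Surp in this run). A stripped-down, hedged paraphrase of that pattern, not the real setup/common.sh:

  # Hedged sketch of the get_meminfo pattern visible in the xtrace: scan a
  # meminfo file line by line and print the value of a single field.
  get_meminfo_value() {
      local want=$1 var val rest
      while IFS=': ' read -r var val rest; do
          [[ $var == "$want" ]] || continue   # skip every other key
          echo "$val"                         # e.g. 1024 for HugePages_Total
          return 0
      done < /proc/meminfo
      return 1
  }
  surp=$(get_meminfo_value HugePages_Surp)    # expected to be 0 in this run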
00:44:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 
00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.358 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.358 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 
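As a quick sanity check on the snapshot being scanned here: the dump above reports HugePages_Total: 1024 and Hugepagesize: 2048 kB, and 1024 x 2048 kB = 2097152 kB, which matches the Hugetlb: 2097152 kB field in the same dump. In other words, 2 GiB of memory is sitting in the 2 MB hugepage pool, with HugePages_Free still at 1024, consistent with no consumer having mapped the pages yet.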
00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.359 00:44:23 -- setup/common.sh@33 -- # echo 0 00:04:39.359 00:44:23 -- setup/common.sh@33 -- # return 0 00:04:39.359 00:44:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:39.359 00:44:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.359 00:44:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.359 00:44:23 -- setup/common.sh@18 -- # local node= 00:04:39.359 00:44:23 -- setup/common.sh@19 -- # local var val 00:04:39.359 00:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.359 00:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.359 00:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.359 00:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.359 00:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.359 00:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43898520 kB' 'MemAvailable: 47408852 kB' 'Buffers: 2704 kB' 'Cached: 12212188 kB' 'SwapCached: 0 kB' 'Active: 9161420 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766372 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458292 kB' 'Mapped: 215884 kB' 'Shmem: 8311388 kB' 'KReclaimable: 200900 kB' 'Slab: 586380 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385480 kB' 'KernelStack: 12784 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # 
continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 
00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.360 00:44:23 -- setup/common.sh@33 -- # echo 0 00:04:39.360 00:44:23 -- setup/common.sh@33 -- # return 0 00:04:39.360 00:44:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:39.360 00:44:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.360 nr_hugepages=1024 00:04:39.360 00:44:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.360 resv_hugepages=0 00:04:39.360 00:44:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.360 surplus_hugepages=0 00:04:39.360 00:44:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.360 anon_hugepages=0 00:04:39.360 00:44:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.360 00:44:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.360 00:44:23 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:04:39.360 00:44:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.360 00:44:23 -- setup/common.sh@18 -- # local node= 00:04:39.360 00:44:23 -- setup/common.sh@19 -- # local var val 00:04:39.360 00:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.360 00:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.360 00:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.361 00:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.361 00:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.361 00:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 00:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43897696 kB' 'MemAvailable: 47408028 kB' 'Buffers: 2704 kB' 'Cached: 12212204 kB' 'SwapCached: 0 kB' 'Active: 9161456 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766408 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458300 kB' 'Mapped: 215884 kB' 'Shmem: 8311404 kB' 'KReclaimable: 200900 kB' 'Slab: 586380 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385480 kB' 'KernelStack: 12784 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.361 00:44:23 -- setup/common.sh@32 -- # continue 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 00:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 00:44:23 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.361 00:44:23 -- setup/common.sh@32 -- # continue
00:04:39.361 00:44:23 -- ... (setup/common.sh@31-32: IFS=': ' read -r var val _ walks every remaining /proc/meminfo key, Active through Unaccepted; none match HugePages_Total and each falls through to continue) ...
00:04:39.362 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.362 00:44:23 -- setup/common.sh@33 -- # echo 1024
00:04:39.362 00:44:23 -- setup/common.sh@33 -- # return 0
00:04:39.362 00:44:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.362 00:44:23 -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.362 00:44:23 -- setup/hugepages.sh@27 -- # local node
00:04:39.362 00:44:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.362 00:44:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.362 00:44:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.362 00:44:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.362 00:44:23 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.362 00:44:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.362 00:44:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.362 00:44:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.362 00:44:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.362 00:44:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.362 00:44:23 -- setup/common.sh@18 -- # local node=0
00:04:39.362 00:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.362 00:44:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.362 00:44:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.362 00:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.362 00:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26787928 kB' 'MemUsed: 6041956 kB' 'SwapCached: 0 kB' 'Active: 3687496 kB' 'Inactive: 155168 kB' 'Active(anon): 3526204 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651360 kB' 'Mapped: 112476 kB' 'AnonPages: 194524 kB' 'Shmem: 3334900 kB' 'KernelStack: 6792 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101928 kB' 'Slab: 331528 kB' 'SReclaimable: 101928 kB' 'SUnreclaim: 229600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.362 00:44:23 -- ... (the node0 snapshot is read back key by key; everything before HugePages_Surp falls through to continue at setup/common.sh@32) ...
00:04:39.363 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.363 00:44:23 -- setup/common.sh@33 -- # echo 0
00:04:39.363 00:44:23 -- setup/common.sh@33 -- # return 0
00:04:39.363 00:44:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
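The lookup above pulls HugePages_Surp for node0 out of /sys/devices/system/node/node0/meminfo, and the same is about to happen for node1. For a manual spot check, the per-node hugepage counters are also exposed as individual sysfs files; a minimal sketch (illustrative shell, not part of the test scripts):

#!/usr/bin/env bash
# Illustrative only: print the per-NUMA-node 2 MB hugepage counters that the
# trace above reads via /sys/devices/system/node/node<N>/meminfo.
for node in /sys/devices/system/node/node[0-9]*; do
	hp=$node/hugepages/hugepages-2048kB
	printf '%s: nr=%s free=%s surplus=%s\n' \
		"${node##*/}" \
		"$(cat "$hp/nr_hugepages")" \
		"$(cat "$hp/free_hugepages")" \
		"$(cat "$hp/surplus_hugepages")"
done

On the machine in this run, both nodes would report nr=512 free=512 surplus=0, matching the meminfo snapshot printed above.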
00:04:39.363 00:44:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.363 00:44:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.363 00:44:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:39.363 00:44:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.363 00:44:23 -- setup/common.sh@18 -- # local node=1
00:04:39.363 00:44:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:39.363 00:44:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.363 00:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17111348 kB' 'MemUsed: 10600476 kB' 'SwapCached: 0 kB' 'Active: 5474080 kB' 'Inactive: 3353288 kB' 'Active(anon): 5240324 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8563560 kB' 'Mapped: 103408 kB' 'AnonPages: 263868 kB' 'Shmem: 4976516 kB' 'KernelStack: 5992 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98972 kB' 'Slab: 254852 kB' 'SReclaimable: 98972 kB' 'SUnreclaim: 155880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.363 00:44:23 -- ... (the node1 snapshot is read back key by key; only HugePages_Surp matches) ...
00:04:39.364 00:44:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.364 00:44:23 -- setup/common.sh@33 -- # echo 0
00:04:39.364 00:44:23 -- setup/common.sh@33 -- # return 0
00:04:39.364 00:44:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.364 00:44:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.364 00:44:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.364 00:44:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.364 00:44:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:39.364 node0=512 expecting 512
00:04:39.364 00:44:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.364 00:44:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.364 00:44:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.364 00:44:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:39.364 node1=512 expecting 512
00:04:39.364 00:44:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:39.364 
00:04:39.364 real	0m1.360s
00:04:39.364 user	0m0.560s
00:04:39.364 sys	0m0.763s
00:04:39.364 00:44:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.364 00:44:23 -- common/autotest_common.sh@10 -- # set +x
00:04:39.364 ************************************
00:04:39.364 END TEST per_node_1G_alloc
00:04:39.364 ************************************
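per_node_1G_alloc passes because each node reports the 512 pages the test expected. The get_meminfo pattern the trace keeps exercising is: pick /proc/meminfo or the per-node meminfo file, strip the "Node <n> " prefix, then split each line on ': ' until the requested key matches. A rough sketch of that helper, assuming the behaviour shown in the trace rather than the verbatim setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the meminfo lookup seen in the xtrace above (illustrative, not the
# exact SPDK helper). Returns the value column for one key, optionally scoped
# to a NUMA node.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo
	# When a node index is given, prefer the per-NUMA-node view.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	# Per-node files prefix every line with "Node <n> "; strip that so the
	# "Key: value" split works for both layouts.
	local -a mem
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# e.g. get_meminfo_sketch HugePages_Surp 0  ->  0 for the node0 snapshot above,
# which is exactly the value fed into (( nodes_test[node] += 0 )).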
00:04:39.364 00:44:23 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:39.364 00:44:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:39.364 00:44:23 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:39.364 00:44:23 -- common/autotest_common.sh@10 -- # set +x
00:04:39.364 ************************************
00:04:39.364 START TEST even_2G_alloc
00:04:39.364 ************************************
00:04:39.364 00:44:23 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:39.364 00:44:23 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:39.364 00:44:23 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:39.364 00:44:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.364 00:44:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:39.364 00:44:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:39.365 00:44:23 -- ... (setup/hugepages.sh@62-84: no user-supplied node list, so the 1024 pages are split evenly across the 2 nodes: nodes_test[0]=512, nodes_test[1]=512) ...
00:04:39.365 00:44:23 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:39.365 00:44:23 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:39.365 00:44:23 -- setup/hugepages.sh@153 -- # setup output
00:04:39.365 00:44:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.365 00:44:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:40.771 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:40.771 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:40.771 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:40.771 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:40.771 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:40.771 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:40.771 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:40.771 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:40.771 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:40.771 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:40.771 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:40.771 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:40.771 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:40.771 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:40.771 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:40.771 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:40.771 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
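For even_2G_alloc, the test exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and lets spdk/scripts/setup.sh do the real work: allocating the pool and leaving the devices on vfio-pci, as the lines above show. Purely to illustrate the even split being requested, a hypothetical standalone snippet that does the equivalent at the sysfs level (needs root; assumes the 2048 kB default hugepage size reported later in this log):

#!/usr/bin/env bash
# Illustrative only: split NRHUGE 2 MB hugepages evenly across NUMA nodes by
# writing the per-node sysfs knobs. In the run above this is done by
# scripts/setup.sh, not by this snippet.
NRHUGE=${NRHUGE:-1024}

nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$((NRHUGE / ${#nodes[@]}))   # 1024 / 2 = 512 per node in this run

for node in "${nodes[@]}"; do
	echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
done

# Show the result, one line per node.
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages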
00:04:40.771 00:44:24 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:40.771 00:44:24 -- setup/hugepages.sh@89 -- # local node
00:04:40.771 00:44:24 -- setup/hugepages.sh@92 -- # local surp
00:04:40.771 00:44:24 -- setup/hugepages.sh@93 -- # local resv
00:04:40.771 00:44:24 -- setup/hugepages.sh@94 -- # local anon
00:04:40.771 00:44:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.771 00:44:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.771 00:44:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:40.771 00:44:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.771 00:44:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.771 00:44:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.771 00:44:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43895884 kB' 'MemAvailable: 47406216 kB' 'Buffers: 2704 kB' 'Cached: 12212272 kB' 'SwapCached: 0 kB' 'Active: 9161984 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766936 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458628 kB' 'Mapped: 215888 kB' 'Shmem: 8311472 kB' 'KReclaimable: 200900 kB' 'Slab: 586252 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385352 kB' 'KernelStack: 12784 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB'
00:04:40.771 00:44:24 -- ... (the snapshot is read back key by key at setup/common.sh@31-32; only AnonHugePages matches) ...
00:04:40.772 00:44:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.772 00:44:24 -- setup/common.sh@33 -- # echo 0
00:04:40.772 00:44:24 -- setup/common.sh@33 -- # return 0
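verify_nr_hugepages is collecting anon, surp and resv so it can repeat the accounting used earlier at setup/hugepages.sh@110: the HugePages_Total the kernel reports has to equal the 1024 pages the test asked for plus any surplus and reserved pages. A small sketch of that arithmetic, with a hypothetical helper and variable names that only mirror the trace:

#!/usr/bin/env bash
# Sketch of the hugepage accounting being built up in the trace above.
# The meminfo() helper and the exact pass/fail message are illustrative.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                     # what even_2G_alloc requested (2 MB pages)
anon=$(meminfo AnonHugePages)         # kB of transparent hugepages in use
surp=$(meminfo HugePages_Surp)        # surplus pages beyond the configured pool
resv=$(meminfo HugePages_Rsvd)        # pages reserved but not yet faulted in
total=$(meminfo HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage pool consistent: total=$total surp=$surp resv=$resv anon=${anon} kB"
else
	echo "unexpected hugepage accounting: total=$total" >&2
fi

In the snapshot above this works out to 1024 == 1024 + 0 + 0, with AnonHugePages at 0 kB.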
00:04:40.772 00:44:24 -- setup/hugepages.sh@97 -- # anon=0
00:04:40.772 00:44:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.772 00:44:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.772 00:44:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.772 00:44:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.772 00:44:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43896488 kB' 'MemAvailable: 47406820 kB' 'Buffers: 2704 kB' 'Cached: 12212276 kB' 'SwapCached: 0 kB' 'Active: 9162108 kB' 'Inactive: 3508456 kB' 'Active(anon): 8767060 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458744 kB' 'Mapped: 215888 kB' 'Shmem: 8311476 kB' 'KReclaimable: 200900 kB' 'Slab: 586256 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385356 kB' 'KernelStack: 12720 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB'
00:04:40.773 00:44:24 -- ... (the snapshot is read back key by key at setup/common.sh@31-32; only HugePages_Surp matches) ...
00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.774 00:44:24 -- setup/common.sh@33 -- # echo 0
00:04:40.774 00:44:24 -- setup/common.sh@33 -- # return 0
00:04:40.774 00:44:24 -- setup/hugepages.sh@99 -- # surp=0
00:04:40.774 00:44:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.774 00:44:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.774 00:44:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.774 00:44:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.774 00:44:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541708 kB' 'MemFree: 43896736 kB' 'MemAvailable: 47407068 kB' 'Buffers: 2704 kB' 'Cached: 12212288 kB' 'SwapCached: 0 kB' 'Active: 9161644 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458304 kB' 'Mapped: 215888 kB' 'Shmem: 8311488 kB' 'KReclaimable: 200900 kB' 'Slab: 586376 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385476 kB' 'KernelStack: 12768 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # continue 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.774 00:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.774 00:44:24 -- setup/common.sh@32 -- # [[ Active(anon) 
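For readers following the xtrace above: get_meminfo is doing nothing more than scanning a meminfo-style file one "key: value" pair at a time until it reaches the requested field. A minimal stand-alone sketch of that parse (hypothetical helper name meminfo_value; this is an illustration of the same loop, not the job's setup/common.sh):

  #!/usr/bin/env bash
  # meminfo_value KEY [FILE] - print the value recorded for KEY in a meminfo-style file.
  # Same idea as the traced loop: split each line on ': ' and stop at the first match.
  meminfo_value() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"            # e.g. "0" for HugePages_Surp on this system
              return 0
          fi
      done < "$mem_f"
      return 1                       # requested key not present
  }

  meminfo_value HugePages_Surp       # would print 0 for the run captured in this log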
[xtrace condensed: setup/common.sh@31-32 again walks every /proc/meminfo key, MemTotal through HugePages_Free, hitting "continue" for each because none matches HugePages_Rsvd]
00:04:40.775 00:44:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.775 00:44:24 -- setup/common.sh@33 -- # echo 0
00:04:40.775 00:44:24 -- setup/common.sh@33 -- # return 0
00:04:40.775 00:44:24 -- setup/hugepages.sh@100 -- # resv=0
00:04:40.775 00:44:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:40.775 nr_hugepages=1024
00:04:40.775 00:44:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:40.775 resv_hugepages=0
00:04:40.775 00:44:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:40.775 surplus_hugepages=0
00:04:40.775 00:44:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:40.775 anon_hugepages=0
00:04:40.775 00:44:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.775 00:44:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:40.775 00:44:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.775 00:44:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.775 00:44:24 -- setup/common.sh@18 -- # local node=
00:04:40.775 00:44:24 -- setup/common.sh@19 -- # local var val
00:04:40.775 00:44:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.775 00:44:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.775 00:44:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.775 00:44:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.775 00:44:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.036 00:44:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.036 00:44:24 -- setup/common.sh@31 -- # IFS=': '
00:04:41.036 00:44:24 -- setup/common.sh@31 -- # read -r var val _
00:04:41.036 00:44:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43896484 kB' 'MemAvailable: 47406816 kB' 'Buffers: 2704 kB' 'Cached: 12212300 kB' 'SwapCached: 0 kB' 'Active: 9161712 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766664 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458308 kB' 'Mapped: 215888 kB' 'Shmem: 8311500 kB' 'KReclaimable: 200900 kB' 'Slab: 586376 kB' 'SReclaimable: 200900 kB' 'SUnreclaim: 385476 kB' 'KernelStack: 12768 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9921704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB'
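The figures echoed just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) feed the consistency check at setup/hugepages.sh@107-110: the kernel's HugePages_Total has to equal the requested page count once surplus and reserved pages are added in. A self-contained sketch of that arithmetic, with the values this log reports shown in the comments (the awk one-liners are an illustration, not the script's own helpers):

  # Hugepage accounting check: total == requested + surplus + reserved.
  nr_hugepages=1024                                              # what the test configured
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)     # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)     # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in this run

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
  else
      echo "unexpected hugepage count: total=$total surp=$surp resv=$resv" >&2
  fi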
[xtrace condensed: setup/common.sh@31-32 scans every /proc/meminfo key, MemTotal through Unaccepted, and takes "continue" on each because none matches HugePages_Total]
00:04:41.037 00:44:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.037 00:44:24 -- setup/common.sh@33 -- # echo 1024
00:04:41.037 00:44:24 -- setup/common.sh@33 -- # return 0
00:04:41.037 00:44:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:41.037 00:44:24 -- setup/hugepages.sh@112 -- # get_nodes
00:04:41.037 00:44:24 -- setup/hugepages.sh@27 -- # local node
00:04:41.037 00:44:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.037 00:44:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:41.037 00:44:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.037 00:44:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:41.037 00:44:24 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:41.037 00:44:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:41.037 00:44:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.037 00:44:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:41.037 00:44:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:41.037 00:44:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.037 00:44:24 -- setup/common.sh@18 -- # local node=0
00:04:41.037 00:44:24 -- setup/common.sh@19 -- # local var val
00:04:41.037 00:44:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.037 00:44:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.037 00:44:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:41.037 00:44:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:41.037 00:44:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.037 00:44:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.037 00:44:24 -- setup/common.sh@31 -- # IFS=': '
00:04:41.037 00:44:24 -- setup/common.sh@31 -- # read -r var val _
00:04:41.037 00:44:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26792228 kB' 'MemUsed: 6037656 kB' 'SwapCached: 0 kB' 'Active: 3686892 kB' 'Inactive: 155168 kB' 'Active(anon): 3525600 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651404 kB' 'Mapped: 112480 kB' 'AnonPages: 193756 kB' 'Shmem: 3334944 kB' 'KernelStack: 6744 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101928 kB' 'Slab: 331580 kB' 'SReclaimable: 101928 kB' 'SUnreclaim: 229652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
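Once the global numbers check out, get_nodes walks /sys/devices/system/node/node* (two nodes on this machine, 512 pages expected on each) and repeats the meminfo read against each node's own meminfo file. Per-node meminfo lines carry a "Node N " prefix, which the traced script strips with mem=("${mem[@]#Node +([0-9]) }"); the sketch below sidesteps that with awk instead. Hypothetical illustration, not the job's setup/hugepages.sh:

  # Enumerate NUMA nodes and read HugePages_Surp from each node's meminfo.
  shopt -s nullglob
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}                       # "0", "1", ...
      # per-node lines look like "Node 0 HugePages_Surp:  0", so the value is field 4
      surp=$(awk '/HugePages_Surp:/ {print $4}' "$node_dir/meminfo")
      echo "node$node: HugePages_Surp=$surp"        # 0 on both nodes in this log
  done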
[xtrace condensed: the node 0 lookup scans each key of /sys/devices/system/node/node0/meminfo, MemTotal through HugePages_Free, and takes "continue" on every key that is not HugePages_Surp]
00:04:41.038 00:44:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.038 00:44:25 -- setup/common.sh@33 -- # echo 0
00:04:41.038 00:44:25 -- setup/common.sh@33 -- # return 0
00:04:41.038 00:44:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.038 00:44:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.038 00:44:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:41.038 00:44:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:41.038 00:44:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.038 00:44:25 -- setup/common.sh@18 -- # local node=1
00:04:41.038 00:44:25 -- setup/common.sh@19 -- # local var val
00:04:41.038 00:44:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.038 00:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.038 00:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:41.038 00:44:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:41.038 00:44:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.038 00:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
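Node 0 has just reported HugePages_Surp: 0 (with HugePages_Total: 512), and the loop moves on to node 1, which should hold the other 512 pages of the 1024 configured globally. A small check of that split, again as a hedged illustration rather than the script's own code:

  # Verify that per-node HugePages_Total adds up to the global figure (512 + 512 == 1024 here).
  global_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  node_sum=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node_pages=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
      node_sum=$(( node_sum + node_pages ))
  done
  if (( node_sum == global_total )); then
      echo "per-node hugepages add up: $node_sum"
  else
      echo "mismatch: nodes=$node_sum global=$global_total" >&2
  fi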
00:04:41.038 00:44:25 -- setup/common.sh@31 -- # IFS=': '
00:04:41.038 00:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17104256 kB' 'MemUsed: 10607568 kB' 'SwapCached: 0 kB' 'Active: 5475100 kB' 'Inactive: 3353288 kB' 'Active(anon): 5241344 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8563628 kB' 'Mapped: 103408 kB' 'AnonPages: 264808 kB' 'Shmem: 4976584 kB' 'KernelStack: 6024 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98972 kB' 'Slab: 254796 kB' 'SReclaimable: 98972 kB' 'SUnreclaim: 155824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:41.038 00:44:25 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the node 1 lookup scans each key of /sys/devices/system/node/node1/meminfo, MemTotal through Unaccepted, taking "continue" on every key that is not HugePages_Surp]
00:04:41.039 00:44:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.039 00:44:25 --
setup/common.sh@32 -- # continue 00:04:41.039 00:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.039 00:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.039 00:44:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.039 00:44:25 -- setup/common.sh@32 -- # continue 00:04:41.039 00:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.039 00:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.039 00:44:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.039 00:44:25 -- setup/common.sh@33 -- # echo 0 00:04:41.039 00:44:25 -- setup/common.sh@33 -- # return 0 00:04:41.039 00:44:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.039 00:44:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.039 00:44:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.039 00:44:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.039 00:44:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.039 node0=512 expecting 512 00:04:41.039 00:44:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.039 00:44:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.039 00:44:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.039 00:44:25 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:41.039 node1=512 expecting 512 00:04:41.039 00:44:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:41.039 00:04:41.039 real 0m1.509s 00:04:41.039 user 0m0.624s 00:04:41.039 sys 0m0.852s 00:04:41.039 00:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.039 00:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:41.039 ************************************ 00:04:41.039 END TEST even_2G_alloc 00:04:41.039 ************************************ 00:04:41.039 00:44:25 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:41.039 00:44:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.039 00:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.039 00:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:41.039 ************************************ 00:04:41.039 START TEST odd_alloc 00:04:41.039 ************************************ 00:04:41.040 00:44:25 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:41.040 00:44:25 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:41.040 00:44:25 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:41.040 00:44:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:41.040 00:44:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.040 00:44:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.040 00:44:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.040 00:44:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:41.040 00:44:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.040 00:44:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.040 00:44:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.040 00:44:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:04:41.040 00:44:25 -- setup/hugepages.sh@83 -- # : 513 00:04:41.040 00:44:25 -- setup/hugepages.sh@84 -- # : 1 00:04:41.040 00:44:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:41.040 00:44:25 -- setup/hugepages.sh@83 -- # : 0 00:04:41.040 00:44:25 -- setup/hugepages.sh@84 -- # : 0 00:04:41.040 00:44:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.040 00:44:25 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:41.040 00:44:25 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:41.040 00:44:25 -- setup/hugepages.sh@160 -- # setup output 00:04:41.040 00:44:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.040 00:44:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.975 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.975 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.975 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.975 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.975 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.975 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.975 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.975 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.975 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.975 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.975 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.975 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.975 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.975 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.975 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.975 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.975 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.238 00:44:26 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:42.238 00:44:26 -- setup/hugepages.sh@89 -- # local node 00:04:42.238 00:44:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.238 00:44:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.238 00:44:26 -- setup/hugepages.sh@92 -- # local surp 00:04:42.238 00:44:26 -- setup/hugepages.sh@93 -- # local resv 00:04:42.238 00:44:26 -- setup/hugepages.sh@94 -- # local anon 00:04:42.238 00:44:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.238 00:44:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.238 00:44:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.238 00:44:26 -- setup/common.sh@18 -- # local node= 00:04:42.238 00:44:26 -- setup/common.sh@19 -- # local var val 00:04:42.238 00:44:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.238 00:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.238 00:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.238 00:44:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.238 00:44:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.238 00:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.238 00:44:26 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43882112 kB' 'MemAvailable: 47392440 kB' 'Buffers: 2704 kB' 'Cached: 12212360 kB' 'SwapCached: 0 kB' 'Active: 9158348 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763300 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454936 kB' 'Mapped: 214876 kB' 'Shmem: 8311560 kB' 'KReclaimable: 200892 kB' 'Slab: 586316 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385424 kB' 'KernelStack: 12704 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9907816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.238 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.238 00:44:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.239 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.239 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 
00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.240 00:44:26 -- setup/common.sh@33 -- # echo 0 00:04:42.240 00:44:26 -- setup/common.sh@33 -- # return 0 00:04:42.240 00:44:26 -- setup/hugepages.sh@97 -- # anon=0 00:04:42.240 00:44:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.240 00:44:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.240 00:44:26 -- setup/common.sh@18 -- # local node= 00:04:42.240 00:44:26 -- setup/common.sh@19 -- # local var val 00:04:42.240 00:44:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.240 00:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.240 00:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.240 00:44:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.240 00:44:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.240 00:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43893516 kB' 'MemAvailable: 47403844 kB' 'Buffers: 2704 kB' 'Cached: 12212364 kB' 'SwapCached: 0 kB' 'Active: 9158624 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763576 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455180 kB' 'Mapped: 214880 kB' 'Shmem: 8311564 kB' 'KReclaimable: 200892 kB' 'Slab: 586308 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 
385416 kB' 'KernelStack: 12672 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9907828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.240 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.240 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.241 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.241 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 
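The long runs above of '-- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by '-- # continue' are bash xtrace output from setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or a per-node meminfo file when a node is given), drops the 'Node <n> ' prefix, walks the fields with IFS=': ', skips every key that is not the one requested, and echoes the matching value, falling back to 0. A minimal sketch of that lookup, reconstructed from the trace rather than copied from the real script (function name and structure are illustrative only):

  # Hedged reconstruction of the lookup traced above; not the actual setup/common.sh.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix each line with "Node <n> "; strip it before matching.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue      # the repeated 'continue' lines above
          echo "${val:-0}"
          return 0
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      echo 0                                    # requested key absent
  }
  # e.g. get_meminfo_sketch HugePages_Surp     -> prints 0 on this host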
00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.242 00:44:26 -- setup/common.sh@33 -- # echo 0 00:04:42.242 00:44:26 -- setup/common.sh@33 -- # return 0 00:04:42.242 00:44:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:42.242 00:44:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.242 00:44:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.242 00:44:26 -- setup/common.sh@18 -- # local node= 00:04:42.242 00:44:26 -- setup/common.sh@19 -- # local var val 00:04:42.242 00:44:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.242 00:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.242 00:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.242 00:44:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.242 00:44:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.242 00:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.242 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.242 00:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43892004 kB' 'MemAvailable: 47402332 kB' 'Buffers: 2704 kB' 'Cached: 12212376 kB' 'SwapCached: 0 kB' 'Active: 9158644 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455208 kB' 'Mapped: 214816 kB' 'Shmem: 8311576 kB' 'KReclaimable: 200892 kB' 'Slab: 586332 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385440 kB' 'KernelStack: 12736 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9911872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.243 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.243 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 
00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 
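Earlier in this trace (the setup/hugepages.sh@81-@84 lines just before the vfio-pci listing), odd_alloc requested 1025 hugepages and spread them across the two NUMA nodes from the last node down, so node1 ends up with 512 and node0 with 513 and the odd total is preserved. A hedged sketch of that per-node split; the function name and argument handling are illustrative, not the script's own interface:

  # Hedged sketch of the per-node split shown above: walk nodes from the last
  # one down, giving each the floor of what is left over the nodes remaining.
  split_hugepages_sketch() {
      local total=$1 nodes=$2 i
      local -a per_node
      for (( i = nodes - 1; i >= 0; i-- )); do
          per_node[i]=$(( total / (i + 1) ))
          total=$(( total - per_node[i] ))
      done
      for (( i = 0; i < nodes; i++ )); do
          echo "node$i=${per_node[i]}"
      done
  }
  # split_hugepages_sketch 1025 2   -> node0=513, node1=512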
00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.244 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.244 00:44:26 -- setup/common.sh@33 -- # echo 0 00:04:42.244 00:44:26 -- setup/common.sh@33 -- # return 0 00:04:42.244 00:44:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:42.244 00:44:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:42.244 nr_hugepages=1025 00:04:42.244 00:44:26 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:04:42.244 resv_hugepages=0 00:04:42.244 00:44:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.244 surplus_hugepages=0 00:04:42.244 00:44:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.244 anon_hugepages=0 00:04:42.244 00:44:26 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.244 00:44:26 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:42.244 00:44:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.244 00:44:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.244 00:44:26 -- setup/common.sh@18 -- # local node= 00:04:42.244 00:44:26 -- setup/common.sh@19 -- # local var val 00:04:42.244 00:44:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.244 00:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.244 00:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.244 00:44:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.244 00:44:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.244 00:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.244 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43891828 kB' 'MemAvailable: 47402156 kB' 'Buffers: 2704 kB' 'Cached: 12212376 kB' 'SwapCached: 0 kB' 'Active: 9158884 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763836 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455496 kB' 'Mapped: 214816 kB' 'Shmem: 8311576 kB' 'KReclaimable: 200892 kB' 'Slab: 586316 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385424 kB' 'KernelStack: 12992 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9910508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.245 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- 
setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.246 00:44:26 -- setup/common.sh@33 -- # echo 1025 00:04:42.246 00:44:26 -- setup/common.sh@33 -- # return 0 00:04:42.246 00:44:26 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.246 00:44:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.246 00:44:26 -- setup/hugepages.sh@27 -- # local node 00:04:42.246 00:44:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.246 00:44:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.246 00:44:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.246 00:44:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:42.246 00:44:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.246 00:44:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.246 00:44:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.246 00:44:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.246 00:44:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.246 00:44:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.246 00:44:26 -- setup/common.sh@18 -- # local node=0 00:04:42.246 00:44:26 -- setup/common.sh@19 -- # local var val 00:04:42.246 00:44:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.246 00:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.246 
00:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.246 00:44:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.246 00:44:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.246 00:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26795472 kB' 'MemUsed: 6034412 kB' 'SwapCached: 0 kB' 'Active: 3686264 kB' 'Inactive: 155168 kB' 'Active(anon): 3524972 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651412 kB' 'Mapped: 111424 kB' 'AnonPages: 193132 kB' 'Shmem: 3334952 kB' 'KernelStack: 7208 kB' 'PageTables: 5172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 331468 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 229548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.246 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.246 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@33 -- # echo 0 00:04:42.247 00:44:26 -- setup/common.sh@33 -- # return 0 00:04:42.247 00:44:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.247 00:44:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.247 00:44:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.247 00:44:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:42.247 00:44:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.247 00:44:26 -- setup/common.sh@18 -- # local node=1 00:04:42.247 00:44:26 -- setup/common.sh@19 -- # local var val 00:04:42.247 00:44:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.247 00:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.247 00:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:42.247 00:44:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:42.247 00:44:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.247 00:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17095784 kB' 'MemUsed: 10616040 kB' 'SwapCached: 0 kB' 'Active: 5473548 kB' 'Inactive: 3353288 kB' 'Active(anon): 5239792 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8563712 kB' 'Mapped: 103392 kB' 'AnonPages: 263228 kB' 'Shmem: 4976668 kB' 'KernelStack: 5928 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98972 kB' 'Slab: 254848 kB' 'SReclaimable: 98972 kB' 'SUnreclaim: 155876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 
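[editor's note] The trace above and below is stepping key-by-key through /proc/meminfo and the per-node /sys/devices/system/node/nodeN/meminfo files to pull out a single field (here HugePages_Surp) for each NUMA node. As a minimal sketch of that lookup only — an assumed helper for illustration, not the actual setup/common.sh get_meminfo — it picks the system-wide or node-specific meminfo file, strips the "Node N " prefix that the per-node files carry, and prints the value for the requested key:

    # Sketch of the per-node meminfo lookup the trace performs (illustrative names, not SPDK's code).
    get_meminfo_sketch() {
        local key=$1 node=$2 file=/proc/meminfo
        # Fall back to the node-specific file when a node index is given and the file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && \
            file=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix every line with "Node <N> "; drop it, then match "<key>:" and print its value.
        sed -e 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k {print $2; exit}'
    }
    # e.g. get_meminfo_sketch HugePages_Surp 1   -> 0, matching the node1 value echoed in the trace below
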
00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.247 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.247 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.248 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.248 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.248 00:44:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.248 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.248 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.248 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.248 00:44:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.248 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.248 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.248 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.248 00:44:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.507 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.507 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # continue 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.508 00:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.508 00:44:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.508 00:44:26 -- setup/common.sh@33 -- # echo 0 00:04:42.508 00:44:26 -- setup/common.sh@33 -- # return 0 00:04:42.508 00:44:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.508 00:44:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.508 00:44:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.508 00:44:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:42.508 node0=512 expecting 513 00:04:42.508 00:44:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.508 00:44:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.508 00:44:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.508 00:44:26 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:42.508 node1=513 expecting 512 00:04:42.508 00:44:26 -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:42.508 00:04:42.508 real 0m1.386s 00:04:42.508 user 0m0.593s 00:04:42.508 sys 0m0.758s 00:04:42.508 00:44:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.508 00:44:26 -- common/autotest_common.sh@10 -- # set +x 00:04:42.508 ************************************ 00:04:42.508 END TEST odd_alloc 00:04:42.508 ************************************ 00:04:42.508 00:44:26 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:42.508 00:44:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.508 00:44:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.508 00:44:26 -- common/autotest_common.sh@10 -- # set +x 00:04:42.508 ************************************ 00:04:42.508 START TEST custom_alloc 00:04:42.508 ************************************ 00:04:42.508 00:44:26 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:42.508 00:44:26 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:42.508 00:44:26 -- setup/hugepages.sh@169 -- # local node 00:04:42.508 00:44:26 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:42.508 00:44:26 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:42.508 00:44:26 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:42.508 00:44:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:42.508 00:44:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:42.508 00:44:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.508 00:44:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.508 00:44:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:42.508 00:44:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.508 00:44:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.508 00:44:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.508 00:44:26 -- setup/hugepages.sh@83 -- # : 256 00:04:42.508 00:44:26 -- setup/hugepages.sh@84 -- # : 1 00:04:42.508 00:44:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.508 00:44:26 -- setup/hugepages.sh@83 -- # : 0 00:04:42.508 00:44:26 -- setup/hugepages.sh@84 -- # : 0 00:04:42.508 00:44:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:42.508 00:44:26 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:42.508 00:44:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.508 00:44:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.508 00:44:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.508 00:44:26 -- setup/hugepages.sh@62 -- # 
user_nodes=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.508 00:44:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.508 00:44:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.508 00:44:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.508 00:44:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.508 00:44:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.508 00:44:26 -- setup/hugepages.sh@78 -- # return 0 00:04:42.508 00:44:26 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:42.508 00:44:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.508 00:44:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.508 00:44:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.508 00:44:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.508 00:44:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:42.508 00:44:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.508 00:44:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.508 00:44:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.508 00:44:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.508 00:44:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.508 00:44:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:42.508 00:44:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.509 00:44:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.509 00:44:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.509 00:44:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:42.509 00:44:26 -- setup/hugepages.sh@78 -- # return 0 00:04:42.509 00:44:26 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:42.509 00:44:26 -- setup/hugepages.sh@187 -- # setup output 00:04:42.509 00:44:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.509 00:44:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.444 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.444 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.444 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.444 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.444 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.444 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.444 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.444 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.444 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.444 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.444 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.444 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:04:43.444 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.444 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.444 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.444 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.444 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.708 00:44:27 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:43.708 00:44:27 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:43.708 00:44:27 -- setup/hugepages.sh@89 -- # local node 00:04:43.708 00:44:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.708 00:44:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.708 00:44:27 -- setup/hugepages.sh@92 -- # local surp 00:04:43.708 00:44:27 -- setup/hugepages.sh@93 -- # local resv 00:04:43.708 00:44:27 -- setup/hugepages.sh@94 -- # local anon 00:04:43.708 00:44:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.708 00:44:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.708 00:44:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.708 00:44:27 -- setup/common.sh@18 -- # local node= 00:04:43.708 00:44:27 -- setup/common.sh@19 -- # local var val 00:04:43.708 00:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.708 00:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.708 00:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.708 00:44:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.708 00:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.708 00:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42838232 kB' 'MemAvailable: 46348560 kB' 'Buffers: 2704 kB' 'Cached: 12212456 kB' 'SwapCached: 0 kB' 'Active: 9164152 kB' 'Inactive: 3508456 kB' 'Active(anon): 8769104 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460244 kB' 'Mapped: 215300 kB' 'Shmem: 8311656 kB' 'KReclaimable: 200892 kB' 'Slab: 586024 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385132 kB' 'KernelStack: 12688 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9914288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196956 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.708 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.708 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 
00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- 
setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.709 00:44:27 -- setup/common.sh@33 -- # echo 0 00:04:43.709 00:44:27 -- setup/common.sh@33 -- # return 0 00:04:43.709 00:44:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:43.709 00:44:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.709 00:44:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.709 00:44:27 -- setup/common.sh@18 -- # local node= 00:04:43.709 00:44:27 -- setup/common.sh@19 -- # local var val 00:04:43.709 00:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.709 00:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.709 00:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.709 00:44:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.709 00:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.709 00:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42840600 kB' 'MemAvailable: 46350928 kB' 'Buffers: 2704 kB' 'Cached: 12212460 kB' 'SwapCached: 0 kB' 'Active: 9159188 kB' 'Inactive: 3508456 kB' 'Active(anon): 8764140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455336 kB' 'Mapped: 215600 kB' 'Shmem: 8311660 kB' 'KReclaimable: 200892 kB' 'Slab: 586048 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385156 kB' 'KernelStack: 12752 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9909536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196952 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.709 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.709 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 
00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 
00:44:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 
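Note: the scan above is the harness's get_meminfo helper at work. It reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node index is passed), splits each line on ': ', and keeps hitting 'continue' until the requested key -- here HugePages_Surp -- is found, at which point it echoes the value and returns. A roughly equivalent sketch follows; get_meminfo_sketch and its local variable names are illustrative assumptions, only the key names and file paths come from the trace itself.

get_meminfo_sketch() {
    # get_meminfo_sketch KEY [NODE] -- illustrative reimplementation, not the harness source
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # per-node files prefix every line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then           # e.g. HugePages_Surp, HugePages_Rsvd, HugePages_Total
            echo "$val"                          # numeric value: kB for sizes, a bare count for HugePages_*
            return 0
        fi
    done < "$mem_f"
    return 1
}

For example, get_meminfo_sketch HugePages_Surp would print 0 on this box, matching the 'echo 0' / 'return 0' this scan ends with a little further down, where hugepages.sh then sets surp=0.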
00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.710 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.710 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.711 00:44:27 -- setup/common.sh@33 -- # echo 0 00:04:43.711 00:44:27 -- setup/common.sh@33 -- # return 0 00:04:43.711 00:44:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:43.711 00:44:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.711 00:44:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.711 00:44:27 -- setup/common.sh@18 -- # local node= 00:04:43.711 00:44:27 -- setup/common.sh@19 -- # local var val 00:04:43.711 00:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.711 00:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.711 00:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.711 00:44:27 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.711 00:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.711 00:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42840980 kB' 'MemAvailable: 46351308 kB' 'Buffers: 2704 kB' 'Cached: 12212472 kB' 'SwapCached: 0 kB' 'Active: 9161852 kB' 'Inactive: 3508456 kB' 'Active(anon): 8766804 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458840 kB' 'Mapped: 215252 kB' 'Shmem: 8311672 kB' 'KReclaimable: 200892 kB' 'Slab: 586052 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385160 kB' 'KernelStack: 12752 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9912588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196936 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # 
continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.711 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.711 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.712 00:44:27 -- setup/common.sh@33 -- # echo 0 00:04:43.712 00:44:27 -- setup/common.sh@33 -- # return 0 00:04:43.712 00:44:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:43.712 00:44:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:43.712 nr_hugepages=1536 00:04:43.712 00:44:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.712 resv_hugepages=0 00:04:43.712 00:44:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.712 surplus_hugepages=0 00:04:43.712 00:44:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.712 anon_hugepages=0 00:04:43.712 00:44:27 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:43.712 00:44:27 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:43.712 00:44:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.712 00:44:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.712 00:44:27 -- setup/common.sh@18 -- # local node= 00:04:43.712 00:44:27 -- setup/common.sh@19 -- # local var val 00:04:43.712 00:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.712 00:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.712 00:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.712 00:44:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.712 00:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.712 00:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42842488 kB' 'MemAvailable: 46352816 kB' 'Buffers: 2704 kB' 'Cached: 12212472 kB' 'SwapCached: 0 kB' 'Active: 9163688 kB' 'Inactive: 3508456 kB' 'Active(anon): 8768640 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460260 kB' 'Mapped: 215588 kB' 'Shmem: 8311672 kB' 'KReclaimable: 200892 kB' 'Slab: 586052 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385160 kB' 'KernelStack: 12752 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9914332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196940 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.712 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.712 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.713 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.713 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.714 00:44:27 -- setup/common.sh@33 -- # echo 1536 00:04:43.714 00:44:27 -- setup/common.sh@33 -- # return 0 00:04:43.714 00:44:27 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:43.714 00:44:27 -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.714 00:44:27 -- setup/hugepages.sh@27 -- # local node 00:04:43.714 00:44:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.714 00:44:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:43.714 00:44:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.714 00:44:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.714 00:44:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.714 00:44:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.714 00:44:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.714 00:44:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.714 00:44:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.714 00:44:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.714 00:44:27 -- setup/common.sh@18 -- # local node=0 00:04:43.714 00:44:27 -- setup/common.sh@19 -- # local var val 00:04:43.714 00:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.714 00:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.714 00:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.714 00:44:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.714 00:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.714 00:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26791092 kB' 'MemUsed: 6038792 kB' 'SwapCached: 0 kB' 'Active: 3684368 kB' 'Inactive: 155168 kB' 'Active(anon): 3523076 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651420 kB' 'Mapped: 111576 kB' 'AnonPages: 191248 kB' 'Shmem: 3334960 kB' 'KernelStack: 6808 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 331260 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 229340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 
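Note: by this point the harness has confirmed anon=0, surp=0 and resv=0 globally and that the 1536 allocated hugepages equal nr_hugepages + surp + resv; get_nodes then recorded the intended split of 512 pages on node0 and 1024 on node1, and the scan in progress is re-reading /sys/devices/system/node/node0/meminfo for HugePages_Surp. A per-node check along these lines could look like the sketch below; the expected_pages array and the loop are illustrative only, with the counts taken from this run rather than from the harness's own assertion code.

# Illustrative per-node verification built on get_meminfo_sketch above
declare -A expected_pages=( [0]=512 [1]=1024 )
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(get_meminfo_sketch HugePages_Total "$node")
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    printf 'node%s: HugePages_Total=%s (expected %s), HugePages_Surp=%s (expected 0)\n' \
        "$node" "$total" "${expected_pages[$node]}" "$surp"
done

On this system that would report 512/0 for node0 and 1024/0 for node1, which matches the node0 meminfo dump above and the node1 dump that follows.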
00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.714 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.714 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- 
setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@33 -- # echo 0 00:04:43.715 00:44:27 -- setup/common.sh@33 -- # return 0 00:04:43.715 00:44:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.715 00:44:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.715 00:44:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.715 00:44:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:43.715 00:44:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.715 00:44:27 -- setup/common.sh@18 -- # local node=1 00:04:43.715 00:44:27 -- setup/common.sh@19 -- # local var val 00:04:43.715 00:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.715 00:44:27 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.715 00:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:43.715 00:44:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:43.715 00:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.715 00:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16052236 kB' 'MemUsed: 11659588 kB' 'SwapCached: 0 kB' 'Active: 5473984 kB' 'Inactive: 3353288 kB' 'Active(anon): 5240228 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8563796 kB' 'Mapped: 103392 kB' 'AnonPages: 263640 kB' 'Shmem: 4976752 kB' 'KernelStack: 5944 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98972 kB' 'Slab: 254792 kB' 'SReclaimable: 98972 kB' 'SUnreclaim: 155820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 
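Just above, get_meminfo is invoked with node=1: mem_f starts as /proc/meminfo, is switched to /sys/devices/system/node/node1/meminfo because that file exists, the file is slurped with mapfile, and the "Node 1 " prefix is stripped from every line before the scan. A short sketch of that selection step, mirroring the expansions visible in the trace but not copied verbatim from setup/common.sh:

# Sketch: pick the per-node meminfo file when a node is given, else /proc/meminfo.
shopt -s extglob                 # needed for the +([0-9]) pattern below
node=1
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# Per-node files prefix every line with "Node <n> "; drop it so the keys look
# exactly like /proc/meminfo keys and the same scan loop works for both cases.
mem=("${mem[@]#Node +([0-9]) }")

With the prefix removed, the node-1 values printed in the trace (MemFree 16052236 kB, HugePages_Total 1024, HugePages_Surp 0, ...) feed straight into the key scan shown earlier.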
00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.715 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.715 00:44:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- 
setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 
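When the requested key finally matches, the scan ends with the "echo 0" / "return 0" pair that follows, and the caller captures that value through command substitution; the hugepages.sh@115-@117 lines in the trace then fold the per-node reserved and surplus counts back into the expected allocation. A hedged usage sketch, assuming a get_meminfo like the one reconstructed above (or the real setup/common.sh sourced); the array contents come from the expectations printed later in this run:

# Sketch (usage): capture get_meminfo output and adjust the per-node expectation.
declare -a nodes_test=([0]=512 [1]=1024)       # expected hugepages per node in this run
node=1
resv=$(get_meminfo HugePages_Rsvd "$node")     # reserved pages to add back
surp=$(get_meminfo HugePages_Surp "$node")     # surplus pages to add back (0 here)
(( nodes_test[node] += resv, nodes_test[node] += surp ))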
00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # continue 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.716 00:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.716 00:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.716 00:44:27 -- setup/common.sh@33 -- # echo 0 00:04:43.716 00:44:27 -- setup/common.sh@33 -- # return 0 00:04:43.716 00:44:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.716 00:44:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.716 00:44:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.716 00:44:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.716 00:44:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:43.716 node0=512 expecting 512 00:04:43.716 00:44:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.716 00:44:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.716 00:44:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.716 00:44:27 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:43.716 node1=1024 expecting 1024 00:04:43.716 00:44:27 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:43.716 00:04:43.716 real 0m1.393s 00:04:43.716 user 0m0.617s 00:04:43.716 sys 0m0.742s 00:04:43.716 00:44:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.716 00:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:43.716 ************************************ 00:04:43.716 END TEST custom_alloc 00:04:43.716 ************************************ 00:04:43.716 00:44:27 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:43.716 00:44:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.716 00:44:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.716 00:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:43.716 ************************************ 00:04:43.716 START TEST no_shrink_alloc 00:04:43.716 ************************************ 00:04:43.716 00:44:27 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:43.716 00:44:27 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:43.716 00:44:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:43.716 00:44:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:43.716 00:44:27 -- setup/hugepages.sh@51 -- # shift 00:04:43.716 00:44:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:43.716 00:44:27 -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.716 00:44:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.716 00:44:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:43.716 00:44:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:43.716 00:44:27 -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:43.716 00:44:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.716 00:44:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.716 00:44:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.716 00:44:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.716 00:44:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.716 00:44:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:43.716 00:44:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.716 00:44:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:43.716 00:44:27 -- setup/hugepages.sh@73 -- # return 0 00:04:43.716 00:44:27 -- setup/hugepages.sh@198 -- # setup output 00:04:43.716 00:44:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.716 00:44:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.099 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.099 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:45.099 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.099 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.099 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.099 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.099 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.099 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.099 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.099 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.100 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.100 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.100 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.100 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.100 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.100 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.100 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.100 00:44:29 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:45.100 00:44:29 -- setup/hugepages.sh@89 -- # local node 00:04:45.100 00:44:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.100 00:44:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.100 00:44:29 -- setup/hugepages.sh@92 -- # local surp 00:04:45.100 00:44:29 -- setup/hugepages.sh@93 -- # local resv 00:04:45.100 00:44:29 -- setup/hugepages.sh@94 -- # local anon 00:04:45.100 00:44:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.100 00:44:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.100 00:44:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.100 00:44:29 -- setup/common.sh@18 -- # local node= 00:04:45.100 00:44:29 -- setup/common.sh@19 -- # local var val 00:04:45.100 00:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.100 00:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.100 00:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.100 00:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.100 00:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.100 00:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43835496 kB' 'MemAvailable: 47345824 kB' 'Buffers: 2704 kB' 'Cached: 12212552 kB' 'SwapCached: 0 kB' 'Active: 9158736 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763688 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455184 kB' 'Mapped: 214828 kB' 'Shmem: 8311752 kB' 'KReclaimable: 200892 kB' 'Slab: 586248 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385356 kB' 'KernelStack: 12736 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 
00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
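The custom_alloc case traced earlier closes by comparing the adjusted counts against "node0=512 expecting 512" and "node1=1024 expecting 1024" (the [[ 512,1024 == 512,1024 ]] check), and the no_shrink_alloc test that starts next requests 2097152 of 2048 kB hugepages pinned to node 0, which get_test_nr_hugepages turns into nr_hugepages=1024 before get_test_nr_hugepages_per_node assigns the whole count to the single user node. A rough sketch of that split, reconstructed from the trace rather than copied from hugepages.sh (treating both sizes as kB is an assumption, but it is consistent with the 1024 the trace reports):

# Sketch: derive the hugepage count from the requested size and pin it to the
# requested nodes, matching the numbers shown in the trace.
size=2097152                                  # first argument to get_test_nr_hugepages
default_hugepages=2048                        # Hugepagesize from meminfo, in kB
nr_hugepages=$((size / default_hugepages))    # 1024
user_nodes=(0)                                # second argument: node 0 only
declare -a nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages            # nodes_test[0]=1024
done
echo "nr_hugepages=$nr_hugepages on node(s): ${user_nodes[*]}"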
00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.100 00:44:29 -- setup/common.sh@33 -- # echo 0 00:04:45.100 00:44:29 -- setup/common.sh@33 -- # return 0 00:04:45.100 00:44:29 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.100 00:44:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.100 00:44:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.100 00:44:29 -- setup/common.sh@18 -- # local node= 00:04:45.100 00:44:29 -- setup/common.sh@19 -- # local var val 00:04:45.100 00:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.100 00:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.100 00:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.100 00:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.100 00:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.100 00:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43838800 kB' 'MemAvailable: 47349128 kB' 'Buffers: 2704 kB' 'Cached: 12212556 kB' 'SwapCached: 0 kB' 'Active: 9158620 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763572 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455084 kB' 'Mapped: 214824 kB' 'Shmem: 8311756 kB' 'KReclaimable: 
200892 kB' 'Slab: 586240 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385348 kB' 'KernelStack: 12752 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.100 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.100 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
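After scripts/setup.sh reports the devices already bound to vfio-pci, verify_nr_hugepages first checks that transparent hugepages are not locked to "[never]" and then samples the system-wide counters it needs: AnonHugePages above (no node argument, so /proc/meminfo is read), with the surplus and reserved counts following in the trace. A rough skeleton consistent with that flow; the sysfs path and the exact branching are assumptions, and get_meminfo is assumed to be available from the sourced helpers or the sketches above:

# Sketch: shape of the verification pass, reconstructed from the trace.
verify_hugepage_counters() {
    local anon surp resv
    # Proceed only if THP is not set to "[never]" (assumed path and handling).
    [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] || return 1
    anon=$(get_meminfo AnonHugePages)          # system-wide, from /proc/meminfo
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "anon=$anon surp=$surp resv=$resv"
}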
00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.101 00:44:29 -- setup/common.sh@33 -- # echo 0 00:04:45.101 00:44:29 -- setup/common.sh@33 -- # return 0 00:04:45.101 00:44:29 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.101 00:44:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.101 00:44:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.101 00:44:29 -- setup/common.sh@18 -- # local node= 00:04:45.101 00:44:29 -- setup/common.sh@19 -- # local var val 00:04:45.101 00:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.101 00:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.101 00:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.101 00:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.101 00:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.101 00:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43838872 kB' 'MemAvailable: 47349200 kB' 'Buffers: 2704 kB' 'Cached: 12212560 kB' 'SwapCached: 0 kB' 'Active: 9158272 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763224 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454716 kB' 'Mapped: 214824 kB' 'Shmem: 8311760 kB' 'KReclaimable: 200892 kB' 'Slab: 586248 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385356 kB' 'KernelStack: 12752 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 
-- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 
-- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.101 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.101 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.102 00:44:29 -- setup/common.sh@33 -- # echo 0 00:04:45.102 00:44:29 -- setup/common.sh@33 -- # return 0 00:04:45.102 00:44:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.102 00:44:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.102 nr_hugepages=1024 
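The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches HugePages_Rsvd (0 in this run), which hugepages.sh then records as resv=0 alongside the nr_hugepages=1024 it just echoed. Below is a minimal standalone sketch of that style of lookup; get_meminfo_value is a hypothetical name for illustration, not the SPDK helper itself.

#!/usr/bin/env bash
# Minimal sketch (hypothetical helper, not the SPDK one): fetch a single
# field from /proc/meminfo the same way the trace above does -- split each
# row on ': ' and stop once the requested key is reached.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

resv=$(get_meminfo_value HugePages_Rsvd)    # 0 in the run above
total=$(get_meminfo_value HugePages_Total)  # 1024 in the run above
echo "nr_hugepages=$total resv_hugepages=$resv"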
00:04:45.102 00:44:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.102 resv_hugepages=0 00:04:45.102 00:44:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.102 surplus_hugepages=0 00:04:45.102 00:44:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.102 anon_hugepages=0 00:04:45.102 00:44:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.102 00:44:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.102 00:44:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.102 00:44:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.102 00:44:29 -- setup/common.sh@18 -- # local node= 00:04:45.102 00:44:29 -- setup/common.sh@19 -- # local var val 00:04:45.102 00:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.102 00:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.102 00:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.102 00:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.102 00:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.102 00:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43838368 kB' 'MemAvailable: 47348696 kB' 'Buffers: 2704 kB' 'Cached: 12212580 kB' 'SwapCached: 0 kB' 'Active: 9158576 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763528 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454996 kB' 'Mapped: 214824 kB' 'Shmem: 8311780 kB' 'KReclaimable: 200892 kB' 'Slab: 586248 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385356 kB' 'KernelStack: 12752 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.102 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.102 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.103 00:44:29 -- setup/common.sh@33 -- # echo 1024 00:04:45.103 00:44:29 -- setup/common.sh@33 -- # return 0 00:04:45.103 00:44:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.103 00:44:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.103 00:44:29 -- setup/hugepages.sh@27 -- # local node 00:04:45.103 00:44:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.103 00:44:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.103 00:44:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.103 00:44:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:45.103 00:44:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.103 00:44:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.103 00:44:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.103 00:44:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.103 00:44:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.103 00:44:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.103 00:44:29 -- setup/common.sh@18 -- # local node=0 00:04:45.103 00:44:29 -- setup/common.sh@19 -- # local var val 00:04:45.103 00:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.103 00:44:29 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.103 00:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.103 00:44:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.103 00:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.103 00:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25733956 kB' 'MemUsed: 7095928 kB' 'SwapCached: 0 kB' 'Active: 3684088 kB' 'Inactive: 155168 kB' 'Active(anon): 3522796 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651428 kB' 'Mapped: 111432 kB' 'AnonPages: 190972 kB' 'Shmem: 3334968 kB' 'KernelStack: 6776 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 331308 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 229388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 
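Right above, the same parser is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo because a node argument (node=0) was passed, and each row's leading "Node 0 " prefix is stripped before the key/value split. The snippet below is a rough approximation of that behaviour under a hypothetical name (node_meminfo), not the actual setup/common.sh code.

#!/usr/bin/env bash
# Rough sketch: read one meminfo field either globally or for a single NUMA
# node, dropping the "Node <n> " prefix that the per-node files carry.
node_meminfo() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        if [[ $line == Node\ * ]]; then
            line=${line#Node }   # "Node 0 MemTotal: ..." -> "0 MemTotal: ..."
            line=${line#* }      # -> "MemTotal: ..."
        fi
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

node_meminfo HugePages_Surp 0   # 0 against the node0 snapshot shown above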
00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- 
setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 
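While the node0 HugePages_Surp read continues below, the split it is checked against (node0=1024, node1=0, reported a little further down as "node0=1024 expecting 1024") comes from enumerating the NUMA nodes first, as in the get_nodes step traced earlier (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2). A sketch of that enumeration, assuming 2048 kB hugepages and the standard per-node sysfs counters rather than the script's own bookkeeping:

#!/usr/bin/env bash
# Sketch only: count NUMA nodes and record the 2 MB hugepages each exposes.
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"
done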
00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # continue 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.103 00:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.103 00:44:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.103 00:44:29 -- setup/common.sh@33 -- # echo 0 00:04:45.103 00:44:29 -- setup/common.sh@33 -- # return 0 00:04:45.103 00:44:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.103 00:44:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.103 00:44:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.103 00:44:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.103 00:44:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.103 node0=1024 expecting 1024 00:04:45.103 00:44:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.103 00:44:29 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:45.103 00:44:29 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:45.103 00:44:29 -- setup/hugepages.sh@202 -- # setup output 00:04:45.103 00:44:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.103 00:44:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.489 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.489 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:46.489 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.489 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.489 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.489 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.489 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.489 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.489 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.489 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.489 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.489 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.489 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.489 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.489 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.489 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.489 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.489 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:46.489 00:44:30 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:46.489 00:44:30 -- setup/hugepages.sh@89 -- # local node 00:04:46.489 00:44:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.489 00:44:30 -- setup/hugepages.sh@91 -- 
# local sorted_s 00:04:46.489 00:44:30 -- setup/hugepages.sh@92 -- # local surp 00:04:46.489 00:44:30 -- setup/hugepages.sh@93 -- # local resv 00:04:46.489 00:44:30 -- setup/hugepages.sh@94 -- # local anon 00:04:46.489 00:44:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.489 00:44:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.489 00:44:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.489 00:44:30 -- setup/common.sh@18 -- # local node= 00:04:46.489 00:44:30 -- setup/common.sh@19 -- # local var val 00:04:46.489 00:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.489 00:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.489 00:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.489 00:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.489 00:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.489 00:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43835376 kB' 'MemAvailable: 47345704 kB' 'Buffers: 2704 kB' 'Cached: 12212628 kB' 'SwapCached: 0 kB' 'Active: 9158516 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763468 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454856 kB' 'Mapped: 214828 kB' 'Shmem: 8311828 kB' 'KReclaimable: 200892 kB' 'Slab: 586040 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385148 kB' 'KernelStack: 12752 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 
00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.489 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.489 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 
00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.490 00:44:30 -- setup/common.sh@33 -- # echo 0 00:04:46.490 00:44:30 -- setup/common.sh@33 -- # return 0 00:04:46.490 00:44:30 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.490 00:44:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.490 00:44:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.490 00:44:30 -- setup/common.sh@18 -- # local node= 00:04:46.490 00:44:30 -- setup/common.sh@19 -- # local var val 00:04:46.490 00:44:30 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:46.490 00:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.490 00:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.490 00:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.490 00:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.490 00:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43849352 kB' 'MemAvailable: 47359680 kB' 'Buffers: 2704 kB' 'Cached: 12212632 kB' 'SwapCached: 0 kB' 'Active: 9158524 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763476 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454896 kB' 'Mapped: 214828 kB' 'Shmem: 8311832 kB' 'KReclaimable: 200892 kB' 'Slab: 586036 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385144 kB' 'KernelStack: 12752 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 
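The /proc/meminfo snapshot just printed (HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB) is what this second verify pass re-checks: anonymous hugepages only count when transparent_hugepage is not pinned to [never], and the global pool has to equal the requested pages plus surplus plus reserved. A rough sketch of that arithmetic, mirroring the checks visible in the trace rather than reproducing setup/hugepages.sh:

#!/usr/bin/env bash
# Rough sketch of the accounting this pass verifies; values are read live,
# the expected count (1024) is the one requested in the log above.
nr_hugepages=1024

field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
total=$(field HugePages_Total)
surp=$(field HugePages_Surp)
resv=$(field HugePages_Rsvd)

# Global pool must account for requested + surplus + reserved pages.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK ($total == $nr_hugepages + $surp + $resv)"
else
    echo "hugepage accounting mismatch" >&2
fi

# AnonHugePages is only meaningful when THP is not set to [never];
# the trace shows "always [madvise] never" on this host.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    echo "THP mode: $thp; AnonHugePages=$(field AnonHugePages) kB"
fi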
00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.490 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.490 00:44:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.491 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.491 00:44:30 -- setup/common.sh@33 -- # echo 0 00:04:46.491 00:44:30 -- setup/common.sh@33 -- # return 0 00:04:46.491 00:44:30 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.491 00:44:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.491 00:44:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.491 00:44:30 -- setup/common.sh@18 -- # local node= 00:04:46.491 00:44:30 -- setup/common.sh@19 -- # local var val 00:04:46.491 00:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.491 00:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.491 00:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.491 00:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.491 00:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.491 00:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.491 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43849468 kB' 'MemAvailable: 47359796 kB' 'Buffers: 2704 kB' 'Cached: 12212644 kB' 'SwapCached: 0 kB' 'Active: 9158756 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763708 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455108 kB' 'Mapped: 214828 kB' 'Shmem: 8311844 kB' 'KReclaimable: 200892 kB' 'Slab: 586068 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385176 kB' 'KernelStack: 12768 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 
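These repeating blocks are xtrace output of the same get_meminfo helper being invoked once per statistic (HugePages_Surp above, HugePages_Rsvd here, HugePages_Total further down): it snapshots the relevant meminfo file into an array, then walks it key by key, continuing past every non-matching field and echoing the bare value when the requested key is found. A reconstruction of that helper from what the trace shows; this is an approximation, not the verbatim setup/common.sh source:

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo mem line var val _
      shopt -s extglob
      # per-node statistics live under sysfs when a node index is supplied
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every key with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue  # exactly the field-by-field scan traced above
          echo "$val"                       # numeric value, without the trailing "kB"
          return 0
      done
      return 1
  }
  # e.g. get_meminfo HugePages_Surp   -> 0
  #      get_meminfo HugePages_Total  -> 1024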
00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.492 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.492 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.493 00:44:30 -- setup/common.sh@33 -- # echo 0 00:04:46.493 00:44:30 -- setup/common.sh@33 -- # return 0 00:04:46.493 00:44:30 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.493 00:44:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.493 nr_hugepages=1024 00:04:46.493 00:44:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.493 resv_hugepages=0 00:04:46.493 00:44:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.493 surplus_hugepages=0 00:04:46.493 00:44:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.493 anon_hugepages=0 00:04:46.493 00:44:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.493 00:44:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.493 00:44:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.493 00:44:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.493 00:44:30 -- setup/common.sh@18 -- # local node= 00:04:46.493 00:44:30 -- setup/common.sh@19 -- # local var val 00:04:46.493 00:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.493 00:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.493 00:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.493 00:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.493 00:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.493 00:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43849468 kB' 'MemAvailable: 47359796 kB' 'Buffers: 2704 kB' 'Cached: 12212656 kB' 'SwapCached: 0 kB' 'Active: 9158744 kB' 'Inactive: 3508456 kB' 'Active(anon): 8763696 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455104 kB' 'Mapped: 214828 kB' 'Shmem: 8311856 kB' 'KReclaimable: 200892 kB' 'Slab: 586068 kB' 'SReclaimable: 200892 kB' 'SUnreclaim: 385176 kB' 'KernelStack: 12768 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9908516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2358876 kB' 'DirectMap2M: 20629504 kB' 'DirectMap1G: 46137344 kB' 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.493 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.493 00:44:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 
00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 
00:44:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 
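Stripped of the per-key scanning, the check the hugepages test has been building up to here (surp=0, resv=0, nr_hugepages=1024, then the HugePages_Total lookup) amounts to a consistency test along the following lines. `expected` and `nr_hugepages` stand in for values set earlier in the script (both 1024 in this run) and are placeholders, not names lifted from this excerpt:

  surp=$(get_meminfo HugePages_Surp)    # 0 in the trace above
  resv=$(get_meminfo HugePages_Rsvd)    # 0
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  (( expected == nr_hugepages + surp + resv ))                         # requested pool is fully allocated
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # and /proc/meminfo agrees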
00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.494 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.494 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.495 00:44:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.495 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.495 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.495 00:44:30 -- setup/common.sh@33 -- # echo 1024 00:04:46.495 00:44:30 -- setup/common.sh@33 -- # return 0 00:04:46.495 00:44:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:04:46.495 00:44:30 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.495 00:44:30 -- setup/hugepages.sh@27 -- # local node 00:04:46.495 00:44:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.495 00:44:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.495 00:44:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.495 00:44:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.495 00:44:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.495 00:44:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.495 00:44:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.495 00:44:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.495 00:44:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.495 00:44:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.495 00:44:30 -- setup/common.sh@18 -- # local node=0 00:04:46.495 00:44:30 -- setup/common.sh@19 -- # local var val 00:04:46.495 00:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.495 00:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.495 00:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.495 00:44:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.495 00:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.495 00:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.495 00:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25742596 kB' 'MemUsed: 7087288 kB' 'SwapCached: 0 kB' 'Active: 3684276 kB' 'Inactive: 155168 kB' 'Active(anon): 3522984 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3651428 kB' 'Mapped: 111436 kB' 'AnonPages: 191188 kB' 'Shmem: 3334968 kB' 'KernelStack: 6824 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 331276 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 229356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.495 00:44:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.495 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.495 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 
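The get_nodes step traced just above walks the NUMA topology and records how many hugepages each node currently holds (node0=1024, node1=0 on this machine) before re-running get_meminfo per node. The sysfs counter it reads is not visible in the xtrace, so the leaf path and the 2048 kB page size below are assumptions:

  shopt -s extglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # assumption: the per-node 2 MiB counter is what gets recorded
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}     # 2 here
  (( no_nodes > 0 ))            # sanity check: refuse to continue without NUMA info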
00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # continue 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.755 00:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.755 00:44:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.755 00:44:30 -- setup/common.sh@33 -- # echo 0 00:04:46.755 00:44:30 -- setup/common.sh@33 -- # return 0 00:04:46.755 00:44:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.755 00:44:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.755 00:44:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.755 00:44:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.755 00:44:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.756 node0=1024 expecting 1024 00:04:46.756 00:44:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.756 00:04:46.756 real 0m2.806s 00:04:46.756 user 0m1.177s 00:04:46.756 sys 0m1.557s 00:04:46.756 00:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.756 00:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:46.756 ************************************ 00:04:46.756 END TEST no_shrink_alloc 00:04:46.756 ************************************ 00:04:46.756 00:44:30 -- setup/hugepages.sh@217 -- # clear_hp 00:04:46.756 00:44:30 -- setup/hugepages.sh@37 -- # local node hp 00:04:46.756 00:44:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.756 00:44:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.756 00:44:30 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.756 
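The trailing clear_hp loop above prints only bare "echo 0" lines because `set -x` never shows redirections, so the write target has to be inferred. Presumably each per-node, per-size hugepage pool is being drained; a sketch under that assumption:

  for node in "${!nodes_sys[@]}"; do
      for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
          echo 0 > "$hp/nr_hugepages"   # assumed target; the redirection is invisible in xtrace
      done
  done
  export CLEAR_HUGE=yes                 # matches hugepages.sh@45 in the trace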
00:44:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.756 00:44:30 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.756 00:44:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.756 00:44:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.756 00:44:30 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.756 00:44:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.756 00:44:30 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.756 00:44:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.756 00:44:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.756 00:04:46.756 real 0m11.065s 00:04:46.756 user 0m4.343s 00:04:46.756 sys 0m5.674s 00:04:46.756 00:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.756 00:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:46.756 ************************************ 00:04:46.756 END TEST hugepages 00:04:46.756 ************************************ 00:04:46.756 00:44:30 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:46.756 00:44:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.756 00:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.756 00:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:46.756 ************************************ 00:04:46.756 START TEST driver 00:04:46.756 ************************************ 00:04:46.756 00:44:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:46.756 * Looking for test storage... 
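The driver test that starts here resets the setup and then runs guess_driver; as the trace on the following lines shows, the decision reduces to "use vfio-pci when the host has IOMMU groups and vfio_pci resolves to loadable modules". A condensed sketch of that decision, approximating the logic visible in the trace rather than the verbatim setup/driver.sh (the real script likely tries other uio drivers before giving up):

  pick_driver() {
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      # 141 groups on this machine, and modprobe resolves vfio_pci to a chain of .ko files
      if (( ${#iommu_groups[@]} > 0 )) && \
         modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo 'No valid driver found'   # the fallback string the later [[ ]] checks guard against
      fi
  }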
00:04:46.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:46.756 00:44:30 -- setup/driver.sh@68 -- # setup reset 00:04:46.756 00:44:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.756 00:44:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.289 00:44:33 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:49.289 00:44:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.289 00:44:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.289 00:44:33 -- common/autotest_common.sh@10 -- # set +x 00:04:49.289 ************************************ 00:04:49.289 START TEST guess_driver 00:04:49.289 ************************************ 00:04:49.289 00:44:33 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:49.289 00:44:33 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:49.289 00:44:33 -- setup/driver.sh@47 -- # local fail=0 00:04:49.289 00:44:33 -- setup/driver.sh@49 -- # pick_driver 00:04:49.289 00:44:33 -- setup/driver.sh@36 -- # vfio 00:04:49.289 00:44:33 -- setup/driver.sh@21 -- # local iommu_grups 00:04:49.289 00:44:33 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:49.289 00:44:33 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:49.289 00:44:33 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:49.289 00:44:33 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:49.289 00:44:33 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:49.289 00:44:33 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:49.289 00:44:33 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:49.289 00:44:33 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:49.289 00:44:33 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:49.289 00:44:33 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:49.289 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:49.289 00:44:33 -- setup/driver.sh@30 -- # return 0 00:04:49.289 00:44:33 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:49.289 00:44:33 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:49.289 00:44:33 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:49.289 00:44:33 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:49.289 Looking for driver=vfio-pci 00:04:49.289 00:44:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.289 00:44:33 -- setup/driver.sh@45 -- # setup output config 00:04:49.289 00:44:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.289 00:44:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.666 00:44:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.666 00:44:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.666 00:44:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.603 00:44:35 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:51.603 00:44:35 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.603 00:44:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.603 00:44:35 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.603 00:44:35 -- setup/driver.sh@65 -- # setup reset 00:04:51.603 00:44:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.603 00:44:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.130 00:04:54.130 real 0m4.910s 00:04:54.130 user 0m1.187s 00:04:54.130 sys 0m1.876s 00:04:54.131 00:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.131 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:04:54.131 ************************************ 00:04:54.131 END TEST guess_driver 00:04:54.131 ************************************ 00:04:54.131 00:04:54.131 real 0m7.473s 00:04:54.131 user 0m1.784s 00:04:54.131 sys 0m2.875s 00:04:54.131 00:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.131 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:04:54.131 ************************************ 00:04:54.131 END TEST driver 00:04:54.131 ************************************ 00:04:54.131 00:44:38 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.131 00:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.131 00:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.131 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:04:54.131 ************************************ 00:04:54.131 START TEST devices 00:04:54.131 ************************************ 00:04:54.131 00:44:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.131 * Looking for test storage... 
00:04:54.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.131 00:44:38 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:54.131 00:44:38 -- setup/devices.sh@192 -- # setup reset 00:04:54.131 00:44:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.131 00:44:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.034 00:44:39 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:56.034 00:44:39 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:56.034 00:44:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:56.034 00:44:39 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:56.034 00:44:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:56.034 00:44:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:56.034 00:44:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:56.034 00:44:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.034 00:44:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:56.034 00:44:39 -- setup/devices.sh@196 -- # blocks=() 00:04:56.034 00:44:39 -- setup/devices.sh@196 -- # declare -a blocks 00:04:56.034 00:44:39 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:56.034 00:44:39 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:56.034 00:44:39 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:56.034 00:44:39 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:56.034 00:44:39 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:56.034 00:44:39 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:56.034 00:44:39 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:56.034 00:44:39 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:56.034 00:44:39 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:56.034 00:44:39 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:56.034 00:44:39 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:56.034 No valid GPT data, bailing 00:04:56.034 00:44:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.034 00:44:39 -- scripts/common.sh@393 -- # pt= 00:04:56.034 00:44:39 -- scripts/common.sh@394 -- # return 1 00:04:56.034 00:44:39 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:56.034 00:44:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:56.034 00:44:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:56.034 00:44:39 -- setup/common.sh@80 -- # echo 1000204886016 00:04:56.034 00:44:39 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:56.034 00:44:39 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:56.034 00:44:39 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:56.034 00:44:39 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:56.034 00:44:39 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:56.034 00:44:39 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:56.034 00:44:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.034 00:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.034 00:44:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.034 ************************************ 00:04:56.034 START TEST nvme_mount 00:04:56.034 ************************************ 00:04:56.034 00:44:39 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:56.034 00:44:39 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:56.034 00:44:39 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:56.034 00:44:39 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.034 00:44:39 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.034 00:44:39 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:56.034 00:44:39 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.034 00:44:39 -- setup/common.sh@40 -- # local part_no=1 00:04:56.034 00:44:39 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.034 00:44:39 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.034 00:44:39 -- setup/common.sh@44 -- # parts=() 00:04:56.034 00:44:39 -- setup/common.sh@44 -- # local parts 00:04:56.034 00:44:39 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.034 00:44:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.034 00:44:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.034 00:44:39 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.034 00:44:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.034 00:44:39 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:56.034 00:44:39 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.034 00:44:39 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:56.973 Creating new GPT entries in memory. 00:04:56.973 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.973 other utilities. 00:04:56.973 00:44:40 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.973 00:44:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.973 00:44:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.973 00:44:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.973 00:44:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.911 Creating new GPT entries in memory. 00:04:57.911 The operation has completed successfully. 
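The trace above is the nvme_mount test's partition_drive step: sgdisk first zaps any existing GPT/MBR structures with --zap-all, scripts/sync_dev_uevents.sh waits for the matching udev partition events, and a single 1 GiB partition (sectors 2048-2099199, i.e. 2097152 512-byte sectors) is created under a flock on the whole disk; the next lines then format it with mkfs.ext4 -qF and mount it under test/setup/nvme_mount. A minimal stand-alone sketch of the same sequence, assuming a disposable scratch disk and a hypothetical mount point, and using udevadm settle in place of the repo's uevent helper:

    disk=/dev/nvme0n1      # assumption: a scratch disk whose contents may be destroyed
    mnt=/tmp/nvme_mount    # hypothetical mount point
    sgdisk "$disk" --zap-all                            # wipe existing partition tables
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB partition
    udevadm settle                                      # wait for /dev/nvme0n1p1 to appear
    mkfs.ext4 -qF "${disk}p1"                           # quiet, forced ext4, as in the trace
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"

The verify call that follows then checks the mount against 'setup.sh status' output, expecting the 0000:88:00.0 namespace to show up as an active mount ("so not binding PCI dev") rather than as a device to rebind.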
00:04:57.911 00:44:41 -- setup/common.sh@57 -- # (( part++ )) 00:04:57.911 00:44:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.911 00:44:41 -- setup/common.sh@62 -- # wait 3265903 00:04:57.911 00:44:41 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.911 00:44:41 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:57.911 00:44:41 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.911 00:44:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:57.911 00:44:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:57.911 00:44:41 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.911 00:44:41 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.911 00:44:41 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:57.911 00:44:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:57.911 00:44:41 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.911 00:44:41 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.911 00:44:41 -- setup/devices.sh@53 -- # local found=0 00:04:57.911 00:44:41 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.911 00:44:41 -- setup/devices.sh@56 -- # : 00:04:57.911 00:44:41 -- setup/devices.sh@59 -- # local pci status 00:04:57.911 00:44:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.911 00:44:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:57.911 00:44:41 -- setup/devices.sh@47 -- # setup output config 00:04:57.911 00:44:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.911 00:44:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:58.848 00:44:42 -- setup/devices.sh@63 -- # found=1 00:04:58.848 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.848 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.848 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.848 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.848 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.848 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 
00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.849 00:44:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.849 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.109 00:44:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.109 00:44:43 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.109 00:44:43 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.109 00:44:43 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.109 00:44:43 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.109 00:44:43 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.109 00:44:43 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.109 00:44:43 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.109 00:44:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.109 00:44:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.109 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.109 00:44:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.109 00:44:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.367 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.367 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.367 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.367 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.367 00:44:43 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:59.367 00:44:43 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:59.367 00:44:43 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.367 00:44:43 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.367 00:44:43 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.367 00:44:43 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.625 00:44:43 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.625 00:44:43 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:59.625 00:44:43 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.625 00:44:43 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.625 00:44:43 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.625 00:44:43 -- setup/devices.sh@53 -- # local found=0 00:04:59.625 00:44:43 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.625 00:44:43 -- setup/devices.sh@56 -- # : 00:04:59.625 00:44:43 -- setup/devices.sh@59 -- # local pci status 00:04:59.625 00:44:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.625 00:44:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:59.625 00:44:43 -- setup/devices.sh@47 -- # setup output config 00:04:59.625 00:44:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.625 00:44:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:00.559 00:44:44 -- setup/devices.sh@63 -- # found=1 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.559 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.559 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.560 00:44:44 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:00.560 00:44:44 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.560 00:44:44 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.560 00:44:44 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.560 00:44:44 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.560 00:44:44 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:00.560 00:44:44 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:00.560 00:44:44 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:00.560 00:44:44 -- setup/devices.sh@50 -- # local mount_point= 00:05:00.560 00:44:44 -- setup/devices.sh@51 -- # local test_file= 00:05:00.560 00:44:44 -- setup/devices.sh@53 -- # local found=0 00:05:00.560 00:44:44 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.560 00:44:44 -- setup/devices.sh@59 -- # local pci status 00:05:00.560 00:44:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.560 00:44:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:00.560 00:44:44 -- setup/devices.sh@47 -- # setup output config 00:05:00.560 00:44:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.560 00:44:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.934 00:44:45 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:01.934 00:44:45 -- setup/devices.sh@63 -- # found=1 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.934 00:44:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.934 00:44:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.934 00:44:45 -- setup/devices.sh@68 -- # return 0 00:05:01.934 00:44:45 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:01.934 00:44:45 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.934 00:44:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:01.934 00:44:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.934 00:44:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.934 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.934 00:05:01.934 real 0m6.125s 00:05:01.934 user 0m1.341s 00:05:01.934 sys 0m2.355s 00:05:01.934 00:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.934 00:44:45 -- common/autotest_common.sh@10 -- # set +x 00:05:01.934 ************************************ 00:05:01.934 END TEST nvme_mount 00:05:01.934 ************************************ 00:05:01.934 00:44:45 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:01.934 00:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.934 00:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.934 00:44:45 -- common/autotest_common.sh@10 -- # set +x 00:05:01.934 ************************************ 00:05:01.934 START TEST dm_mount 00:05:01.934 ************************************ 00:05:01.934 00:44:45 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:01.934 00:44:45 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:01.934 00:44:45 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:01.934 00:44:45 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:01.934 00:44:45 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:01.934 00:44:45 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.935 00:44:45 -- setup/common.sh@40 -- # local part_no=2 00:05:01.935 00:44:45 -- setup/common.sh@41 -- # local size=1073741824 00:05:01.935 00:44:45 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.935 00:44:45 -- setup/common.sh@44 -- # parts=() 00:05:01.935 00:44:45 -- setup/common.sh@44 -- # local parts 00:05:01.935 00:44:45 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.935 00:44:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.935 00:44:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.935 00:44:45 -- setup/common.sh@46 -- # (( part++ )) 00:05:01.935 00:44:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.935 00:44:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.935 00:44:45 -- setup/common.sh@46 -- # (( part++ )) 00:05:01.935 00:44:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.935 00:44:45 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:01.935 00:44:45 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.935 00:44:45 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:02.872 Creating new GPT entries in memory. 00:05:02.872 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.872 other utilities. 00:05:02.872 00:44:46 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.872 00:44:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.872 00:44:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.872 00:44:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.872 00:44:46 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:03.809 Creating new GPT entries in memory. 00:05:03.809 The operation has completed successfully. 
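dm_mount repeats the same partitioning sequence, this time asking partition_drive for two 1 GiB partitions (nvme0n1p1 and nvme0n1p2) that the following lines join into a single device-mapper node named nvme_dm_test, format with mkfs.ext4 -qF and mount. The exact table handed to dmsetup is not echoed in the trace; a linear concatenation of the two partitions would look like the sketch below, with sector counts matching the 1 GiB partitions created here and a hypothetical mount point:

    # assumption: each partition is 2097152 sectors (1 GiB), as created by sgdisk above
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /mnt/dm_test   # hypothetical mount point

Teardown mirrors the trace further down: umount the filesystem, 'dmsetup remove --force nvme_dm_test', then wipefs the backing partitions so the next test starts from a clean disk.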
00:05:03.809 00:44:47 -- setup/common.sh@57 -- # (( part++ )) 00:05:03.809 00:44:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.809 00:44:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.809 00:44:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.809 00:44:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:05.187 The operation has completed successfully. 00:05:05.187 00:44:49 -- setup/common.sh@57 -- # (( part++ )) 00:05:05.187 00:44:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.187 00:44:49 -- setup/common.sh@62 -- # wait 3268355 00:05:05.187 00:44:49 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:05.187 00:44:49 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.187 00:44:49 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.187 00:44:49 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:05.187 00:44:49 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:05.187 00:44:49 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.187 00:44:49 -- setup/devices.sh@161 -- # break 00:05:05.187 00:44:49 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.187 00:44:49 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:05.187 00:44:49 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:05.187 00:44:49 -- setup/devices.sh@166 -- # dm=dm-0 00:05:05.187 00:44:49 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:05.187 00:44:49 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:05.187 00:44:49 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.187 00:44:49 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:05.187 00:44:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.187 00:44:49 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.187 00:44:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:05.187 00:44:49 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.187 00:44:49 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.187 00:44:49 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:05.187 00:44:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:05.187 00:44:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.187 00:44:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.187 00:44:49 -- setup/devices.sh@53 -- # local found=0 00:05:05.187 00:44:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.187 00:44:49 -- setup/devices.sh@56 -- # : 00:05:05.187 00:44:49 -- 
setup/devices.sh@59 -- # local pci status 00:05:05.187 00:44:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.187 00:44:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:05.187 00:44:49 -- setup/devices.sh@47 -- # setup output config 00:05:05.187 00:44:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.187 00:44:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.122 00:44:50 -- setup/devices.sh@63 -- # found=1 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.122 00:44:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.122 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.381 00:44:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.381 00:44:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.381 00:44:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.381 00:44:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.381 00:44:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.381 00:44:50 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.381 00:44:50 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.381 00:44:50 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:06.381 00:44:50 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.381 00:44:50 -- setup/devices.sh@50 -- # local mount_point= 00:05:06.381 00:44:50 -- setup/devices.sh@51 -- # local test_file= 00:05:06.381 00:44:50 -- setup/devices.sh@53 -- # local found=0 00:05:06.381 00:44:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.381 00:44:50 -- setup/devices.sh@59 -- # local pci status 00:05:06.381 00:44:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.381 00:44:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:06.381 00:44:50 -- setup/devices.sh@47 -- # setup output config 00:05:06.381 00:44:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.381 00:44:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.315 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.315 00:44:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:07.315 00:44:51 -- setup/devices.sh@63 -- # found=1 00:05:07.315 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.315 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.315 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.315 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.315 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.315 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.315 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.574 00:44:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.574 00:44:51 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.574 00:44:51 -- setup/devices.sh@68 -- # return 0 00:05:07.574 00:44:51 -- setup/devices.sh@187 -- # cleanup_dm 00:05:07.574 00:44:51 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:07.574 00:44:51 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.574 00:44:51 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:07.574 00:44:51 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.574 00:44:51 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:07.833 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.833 00:44:51 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.833 00:44:51 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:07.833 00:05:07.833 real 0m5.819s 00:05:07.833 user 0m1.060s 00:05:07.833 sys 0m1.640s 00:05:07.833 00:44:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.833 00:44:51 -- common/autotest_common.sh@10 -- # set +x 00:05:07.833 ************************************ 00:05:07.833 END TEST dm_mount 00:05:07.833 ************************************ 00:05:07.833 00:44:51 -- setup/devices.sh@1 -- # cleanup 00:05:07.833 00:44:51 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:07.833 00:44:51 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.833 00:44:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.833 00:44:51 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.833 00:44:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.833 00:44:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.092 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.092 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.092 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.092 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.092 00:44:52 -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.092 00:44:52 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.092 00:44:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.092 00:44:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.092 00:44:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.092 00:44:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.092 00:44:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.092 00:05:08.092 real 0m13.819s 00:05:08.092 user 0m3.021s 00:05:08.092 sys 0m5.024s 00:05:08.092 00:44:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.092 00:44:52 -- common/autotest_common.sh@10 -- # set +x 00:05:08.092 ************************************ 00:05:08.092 END TEST devices 00:05:08.092 ************************************ 00:05:08.092 00:05:08.092 real 0m42.986s 00:05:08.092 user 0m12.404s 00:05:08.092 sys 0m19.012s 00:05:08.092 00:44:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.092 00:44:52 -- common/autotest_common.sh@10 -- # set +x 00:05:08.092 ************************************ 00:05:08.092 END TEST setup.sh 00:05:08.092 ************************************ 00:05:08.092 00:44:52 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:09.027 Hugepages 00:05:09.027 node hugesize free / total 00:05:09.027 node0 1048576kB 0 / 0 00:05:09.027 node0 2048kB 2048 / 2048 00:05:09.027 node1 1048576kB 0 / 0 00:05:09.027 node1 2048kB 0 / 0 00:05:09.027 00:05:09.027 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.027 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:09.027 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:09.027 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:09.298 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:09.298 00:44:53 -- spdk/autotest.sh@141 -- # uname -s 00:05:09.298 00:44:53 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:09.298 00:44:53 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:09.298 00:44:53 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.233 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.233 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.233 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.233 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.233 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.233 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:05:10.233 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.233 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.233 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.233 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.493 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.493 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.493 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.493 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.493 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.493 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.431 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.431 00:44:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:12.367 00:44:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:12.367 00:44:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:12.367 00:44:56 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.367 00:44:56 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:12.367 00:44:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.367 00:44:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.367 00:44:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.367 00:44:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.367 00:44:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.367 00:44:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:12.367 00:44:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:12.367 00:44:56 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.746 Waiting for block devices as requested 00:05:13.746 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:13.746 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.033 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.033 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.033 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.291 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.291 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:14.291 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.291 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:14.291 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.550 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.550 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.550 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.809 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.809 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:14.809 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.809 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:15.067 00:44:59 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:15.067 00:44:59 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:15.067 00:44:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:15.067 00:44:59 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:15.067 00:44:59 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:15.067 00:44:59 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:15.067 00:44:59 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:15.067 00:44:59 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:15.067 00:44:59 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:15.067 00:44:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:15.067 00:44:59 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:15.067 00:44:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:15.067 00:44:59 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:15.067 00:44:59 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:15.067 00:44:59 -- common/autotest_common.sh@1542 -- # continue 00:05:15.067 00:44:59 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:15.067 00:44:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:15.067 00:44:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.067 00:44:59 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:15.067 00:44:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.067 00:44:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.067 00:44:59 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.444 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.444 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.444 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.380 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.380 00:45:01 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:17.380 00:45:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:17.380 00:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.380 00:45:01 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:17.380 00:45:01 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:17.380 00:45:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.380 00:45:01 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:17.380 00:45:01 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:17.380 00:45:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:17.380 00:45:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:17.380 
00:45:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:17.381 00:45:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.381 00:45:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.381 00:45:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:17.639 00:45:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:17.639 00:45:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:17.639 00:45:01 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:17.639 00:45:01 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:17.639 00:45:01 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:17.639 00:45:01 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:17.639 00:45:01 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:17.639 00:45:01 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:17.639 00:45:01 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:17.639 00:45:01 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3273771 00:05:17.639 00:45:01 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.639 00:45:01 -- common/autotest_common.sh@1583 -- # waitforlisten 3273771 00:05:17.639 00:45:01 -- common/autotest_common.sh@819 -- # '[' -z 3273771 ']' 00:05:17.639 00:45:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.639 00:45:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.639 00:45:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.639 00:45:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.639 00:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.639 [2024-07-23 00:45:01.639608] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
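[Editor's note] The opal_revert_cleanup step above builds its controller list by asking gen_nvme.sh for transport addresses and then filtering on the PCI device ID read from sysfs. Below is a minimal sketch of that flow in isolation; it is not the real autotest_common.sh helpers, and the $rootdir default is just the checkout path used throughout this job.

  #!/usr/bin/env bash
  # Sketch of get_nvme_bdfs / get_nvme_bdfs_by_id as exercised in this log.
  rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

  # gen_nvme.sh emits attach-controller config; jq pulls out the PCI addresses.
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

  # Keep only controllers whose PCI device ID is 0x0a54, as the revert step does.
  for bdf in "${bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && printf '%s\n' "$bdf"
  done
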
00:05:17.639 [2024-07-23 00:45:01.639714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273771 ] 00:05:17.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.639 [2024-07-23 00:45:01.701145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.639 [2024-07-23 00:45:01.790474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.639 [2024-07-23 00:45:01.790682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.575 00:45:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:18.575 00:45:02 -- common/autotest_common.sh@852 -- # return 0 00:05:18.575 00:45:02 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:18.575 00:45:02 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:18.575 00:45:02 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:21.856 nvme0n1 00:05:21.856 00:45:05 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:21.856 [2024-07-23 00:45:05.848118] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:21.856 [2024-07-23 00:45:05.848169] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:21.856 request: 00:05:21.856 { 00:05:21.856 "nvme_ctrlr_name": "nvme0", 00:05:21.856 "password": "test", 00:05:21.856 "method": "bdev_nvme_opal_revert", 00:05:21.856 "req_id": 1 00:05:21.856 } 00:05:21.856 Got JSON-RPC error response 00:05:21.856 response: 00:05:21.856 { 00:05:21.856 "code": -32603, 00:05:21.856 "message": "Internal error" 00:05:21.856 } 00:05:21.856 00:45:05 -- common/autotest_common.sh@1589 -- # true 00:05:21.856 00:45:05 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:21.856 00:45:05 -- common/autotest_common.sh@1593 -- # killprocess 3273771 00:05:21.856 00:45:05 -- common/autotest_common.sh@926 -- # '[' -z 3273771 ']' 00:05:21.856 00:45:05 -- common/autotest_common.sh@930 -- # kill -0 3273771 00:05:21.856 00:45:05 -- common/autotest_common.sh@931 -- # uname 00:05:21.856 00:45:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.856 00:45:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3273771 00:05:21.856 00:45:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:21.856 00:45:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:21.856 00:45:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3273771' 00:05:21.856 killing process with pid 3273771 00:05:21.856 00:45:05 -- common/autotest_common.sh@945 -- # kill 3273771 00:05:21.856 00:45:05 -- common/autotest_common.sh@950 -- # wait 3273771 00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 
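[Editor's note] The revert itself is driven over JSON-RPC: the target is told to attach the controller, then asked to revert the TPer, and on this drive the call fails with error 18 / JSON-RPC -32603 exactly as logged. A sketch of the same two calls issued by hand, assuming a spdk_tgt is already listening on the default /var/tmp/spdk.sock:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Attach the controller at 0000:88:00.0 under the bdev controller name nvme0.
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0

  # Request an OPAL revert with password "test"; on this drive the admin SP
  # session fails and the RPC returns {"code": -32603, "message": "Internal error"}.
  "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test \
      || echo "opal revert rejected, continuing as the test does"
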
00:05:21.856 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 [previous EAL warning repeated many times during target shutdown; repeats omitted] 00:05:21.857 EAL: Unexpected size 0 of DMA remapping
cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.857 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.783 00:45:07 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:23.783 00:45:07 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:23.783 00:45:07 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:23.783 00:45:07 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:23.783 00:45:07 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:23.783 00:45:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:23.783 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.783 00:45:07 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.783 00:45:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.783 00:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.783 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.783 ************************************ 00:05:23.783 START TEST env 00:05:23.783 ************************************ 00:05:23.783 00:45:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.783 * Looking for test storage... 
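[Editor's note] run_test, which drives every sub-suite from here on, is an autotest_common.sh helper; the log only shows its side effects (START/END banners, xtrace toggling, timing). The following is an illustrative reduction of that observed behavior, not the real helper:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Minimal run_test-style wrapper, assuming only what the banners above imply.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # the real helper also records pass/fail bookkeeping
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  run_test_sketch env_memory "$rootdir/test/env/memory/memory_ut"
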
00:05:23.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:23.783 00:45:07 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.783 00:45:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.783 00:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.783 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.783 ************************************ 00:05:23.783 START TEST env_memory 00:05:23.783 ************************************ 00:05:23.783 00:45:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.783 00:05:23.783 00:05:23.783 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.783 http://cunit.sourceforge.net/ 00:05:23.783 00:05:23.783 00:05:23.783 Suite: memory 00:05:23.783 Test: alloc and free memory map ...[2024-07-23 00:45:07.749116] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.783 passed 00:05:23.783 Test: mem map translation ...[2024-07-23 00:45:07.769333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.783 [2024-07-23 00:45:07.769355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.783 [2024-07-23 00:45:07.769396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.783 [2024-07-23 00:45:07.769408] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.783 passed 00:05:23.783 Test: mem map registration ...[2024-07-23 00:45:07.810248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:23.783 [2024-07-23 00:45:07.810268] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:23.783 passed 00:05:23.783 Test: mem map adjacent registrations ...passed 00:05:23.783 00:05:23.783 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.783 suites 1 1 n/a 0 0 00:05:23.783 tests 4 4 4 0 0 00:05:23.783 asserts 152 152 152 0 n/a 00:05:23.783 00:05:23.783 Elapsed time = 0.145 seconds 00:05:23.783 00:05:23.783 real 0m0.152s 00:05:23.783 user 0m0.146s 00:05:23.783 sys 0m0.005s 00:05:23.783 00:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.783 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.783 ************************************ 00:05:23.783 END TEST env_memory 00:05:23.783 ************************************ 00:05:23.783 00:45:07 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.783 00:45:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.783 00:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.783 00:45:07 -- common/autotest_common.sh@10 -- # set +x 
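[Editor's note] Each env sub-test is a standalone binary under test/env, so the pieces exercised here can also be run outside the harness when debugging a failure. The paths below are the ones printed in this log; running them generally needs root plus the hugepage/VFIO setup performed earlier by scripts/setup.sh.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  sudo "$rootdir/test/env/memory/memory_ut"    # mem-map alloc/translate/register tests
  sudo "$rootdir/test/env/vtophys/vtophys"     # the vtophys suite started above
  sudo "$rootdir/test/env/pci/pci_ut"          # pci_hook test, run later in this log
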
00:05:23.783 ************************************ 00:05:23.783 START TEST env_vtophys 00:05:23.783 ************************************ 00:05:23.783 00:45:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.783 EAL: lib.eal log level changed from notice to debug 00:05:23.783 EAL: Detected lcore 0 as core 0 on socket 0 00:05:23.783 EAL: Detected lcore 1 as core 1 on socket 0 00:05:23.783 EAL: Detected lcore 2 as core 2 on socket 0 00:05:23.783 EAL: Detected lcore 3 as core 3 on socket 0 00:05:23.783 EAL: Detected lcore 4 as core 4 on socket 0 00:05:23.783 EAL: Detected lcore 5 as core 5 on socket 0 00:05:23.783 EAL: Detected lcore 6 as core 8 on socket 0 00:05:23.783 EAL: Detected lcore 7 as core 9 on socket 0 00:05:23.783 EAL: Detected lcore 8 as core 10 on socket 0 00:05:23.783 EAL: Detected lcore 9 as core 11 on socket 0 00:05:23.783 EAL: Detected lcore 10 as core 12 on socket 0 00:05:23.783 EAL: Detected lcore 11 as core 13 on socket 0 00:05:23.783 EAL: Detected lcore 12 as core 0 on socket 1 00:05:23.783 EAL: Detected lcore 13 as core 1 on socket 1 00:05:23.783 EAL: Detected lcore 14 as core 2 on socket 1 00:05:23.783 EAL: Detected lcore 15 as core 3 on socket 1 00:05:23.783 EAL: Detected lcore 16 as core 4 on socket 1 00:05:23.783 EAL: Detected lcore 17 as core 5 on socket 1 00:05:23.783 EAL: Detected lcore 18 as core 8 on socket 1 00:05:23.783 EAL: Detected lcore 19 as core 9 on socket 1 00:05:23.783 EAL: Detected lcore 20 as core 10 on socket 1 00:05:23.783 EAL: Detected lcore 21 as core 11 on socket 1 00:05:23.783 EAL: Detected lcore 22 as core 12 on socket 1 00:05:23.783 EAL: Detected lcore 23 as core 13 on socket 1 00:05:23.783 EAL: Detected lcore 24 as core 0 on socket 0 00:05:23.783 EAL: Detected lcore 25 as core 1 on socket 0 00:05:23.783 EAL: Detected lcore 26 as core 2 on socket 0 00:05:23.783 EAL: Detected lcore 27 as core 3 on socket 0 00:05:23.783 EAL: Detected lcore 28 as core 4 on socket 0 00:05:23.783 EAL: Detected lcore 29 as core 5 on socket 0 00:05:23.783 EAL: Detected lcore 30 as core 8 on socket 0 00:05:23.783 EAL: Detected lcore 31 as core 9 on socket 0 00:05:23.783 EAL: Detected lcore 32 as core 10 on socket 0 00:05:23.783 EAL: Detected lcore 33 as core 11 on socket 0 00:05:23.783 EAL: Detected lcore 34 as core 12 on socket 0 00:05:23.783 EAL: Detected lcore 35 as core 13 on socket 0 00:05:23.783 EAL: Detected lcore 36 as core 0 on socket 1 00:05:23.783 EAL: Detected lcore 37 as core 1 on socket 1 00:05:23.783 EAL: Detected lcore 38 as core 2 on socket 1 00:05:23.783 EAL: Detected lcore 39 as core 3 on socket 1 00:05:23.783 EAL: Detected lcore 40 as core 4 on socket 1 00:05:23.783 EAL: Detected lcore 41 as core 5 on socket 1 00:05:23.783 EAL: Detected lcore 42 as core 8 on socket 1 00:05:23.783 EAL: Detected lcore 43 as core 9 on socket 1 00:05:23.783 EAL: Detected lcore 44 as core 10 on socket 1 00:05:23.783 EAL: Detected lcore 45 as core 11 on socket 1 00:05:23.783 EAL: Detected lcore 46 as core 12 on socket 1 00:05:23.783 EAL: Detected lcore 47 as core 13 on socket 1 00:05:23.783 EAL: Maximum logical cores by configuration: 128 00:05:23.783 EAL: Detected CPU lcores: 48 00:05:23.783 EAL: Detected NUMA nodes: 2 00:05:23.783 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:23.783 EAL: Detected shared linkage of DPDK 00:05:23.783 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:23.784 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:23.784 EAL: Registered [vdev] bus. 00:05:23.784 EAL: bus.vdev log level changed from disabled to notice 00:05:23.784 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:23.784 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:23.784 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:23.784 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:23.784 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:23.784 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:23.784 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:23.784 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:23.784 EAL: No shared files mode enabled, IPC will be disabled 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Bus pci wants IOVA as 'DC' 00:05:23.784 EAL: Bus vdev wants IOVA as 'DC' 00:05:23.784 EAL: Buses did not request a specific IOVA mode. 00:05:23.784 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:23.784 EAL: Selected IOVA mode 'VA' 00:05:23.784 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.784 EAL: Probing VFIO support... 00:05:23.784 EAL: IOMMU type 1 (Type 1) is supported 00:05:23.784 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:23.784 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:23.784 EAL: VFIO support initialized 00:05:23.784 EAL: Ask a virtual area of 0x2e000 bytes 00:05:23.784 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:23.784 EAL: Setting up physically contiguous memory... 
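[Editor's note] EAL only selects IOVA mode 'VA' above because the host IOMMU is enabled and the devices were bound to vfio-pci by setup.sh earlier in the job. The checks below are a generic sketch for confirming the same preconditions on a dev box using standard sysfs paths; they are not part of the test itself, and vfio_pci may be built into the kernel rather than loaded as a module.

  # Populated IOMMU groups mean the kernel can hand devices to VFIO.
  ls /sys/kernel/iommu_groups | head

  # Which driver currently owns the NVMe controller used in this job? Expect vfio-pci.
  basename "$(readlink /sys/bus/pci/devices/0000:88:00.0/driver)"

  # Is the vfio-pci module loaded? (May be absent if built in.)
  lsmod | grep -w vfio_pci
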
00:05:23.784 EAL: Setting maximum number of open files to 524288 00:05:23.784 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:23.784 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:23.784 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:23.784 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:23.784 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.784 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:23.784 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.784 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.784 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:23.784 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:23.784 EAL: Hugepages will be freed exactly as allocated. 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: TSC frequency is ~2700000 KHz 00:05:23.784 EAL: Main lcore 0 is ready (tid=7f7f91894a00;cpuset=[0]) 00:05:23.784 EAL: Trying to obtain current memory policy. 00:05:23.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.784 EAL: Restoring previous memory policy: 0 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was expanded by 2MB 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:23.784 EAL: Mem event callback 'spdk:(nil)' registered 00:05:23.784 00:05:23.784 00:05:23.784 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.784 http://cunit.sourceforge.net/ 00:05:23.784 00:05:23.784 00:05:23.784 Suite: components_suite 00:05:23.784 Test: vtophys_malloc_test ...passed 00:05:23.784 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.784 EAL: Restoring previous memory policy: 4 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.784 EAL: Trying to obtain current memory policy. 00:05:23.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.784 EAL: Restoring previous memory policy: 4 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.784 EAL: Trying to obtain current memory policy. 00:05:23.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.784 EAL: Restoring previous memory policy: 4 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.784 EAL: Trying to obtain current memory policy. 
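[Editor's note] The 2 MB heap expansions in this suite come out of the hugepage pool that setup.sh reserved at the top of the job (2048 pages on node0, none on node1 in the status table above). A sketch for reading the same per-node counters directly from sysfs; the paths are standard Linux locations, not SPDK-specific:

  # Per-node 2 MB hugepage pools, matching the "Hugepages" table printed earlier.
  for node in /sys/devices/system/node/node[0-9]*; do
      free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
      total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
      echo "$(basename "$node"): $free / $total 2048kB hugepages free"
  done
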
00:05:23.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.784 EAL: Restoring previous memory policy: 4 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.784 EAL: request: mp_malloc_sync 00:05:23.784 EAL: No shared files mode enabled, IPC is disabled 00:05:23.784 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.784 EAL: Trying to obtain current memory policy. 00:05:23.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.042 EAL: Restoring previous memory policy: 4 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.042 EAL: Trying to obtain current memory policy. 00:05:24.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.042 EAL: Restoring previous memory policy: 4 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was expanded by 66MB 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was shrunk by 66MB 00:05:24.042 EAL: Trying to obtain current memory policy. 00:05:24.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.042 EAL: Restoring previous memory policy: 4 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was expanded by 130MB 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was shrunk by 130MB 00:05:24.042 EAL: Trying to obtain current memory policy. 00:05:24.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.042 EAL: Restoring previous memory policy: 4 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.042 EAL: request: mp_malloc_sync 00:05:24.042 EAL: No shared files mode enabled, IPC is disabled 00:05:24.042 EAL: Heap on socket 0 was expanded by 258MB 00:05:24.042 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.300 EAL: request: mp_malloc_sync 00:05:24.300 EAL: No shared files mode enabled, IPC is disabled 00:05:24.300 EAL: Heap on socket 0 was shrunk by 258MB 00:05:24.300 EAL: Trying to obtain current memory policy. 
00:05:24.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.300 EAL: Restoring previous memory policy: 4 00:05:24.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.300 EAL: request: mp_malloc_sync 00:05:24.300 EAL: No shared files mode enabled, IPC is disabled 00:05:24.300 EAL: Heap on socket 0 was expanded by 514MB 00:05:24.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.557 EAL: request: mp_malloc_sync 00:05:24.557 EAL: No shared files mode enabled, IPC is disabled 00:05:24.557 EAL: Heap on socket 0 was shrunk by 514MB 00:05:24.557 EAL: Trying to obtain current memory policy. 00:05:24.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.814 EAL: Restoring previous memory policy: 4 00:05:24.814 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.814 EAL: request: mp_malloc_sync 00:05:24.814 EAL: No shared files mode enabled, IPC is disabled 00:05:24.814 EAL: Heap on socket 0 was expanded by 1026MB 00:05:25.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.330 EAL: request: mp_malloc_sync 00:05:25.330 EAL: No shared files mode enabled, IPC is disabled 00:05:25.330 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:25.330 passed 00:05:25.330 00:05:25.330 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.330 suites 1 1 n/a 0 0 00:05:25.330 tests 2 2 2 0 0 00:05:25.330 asserts 497 497 497 0 n/a 00:05:25.330 00:05:25.330 Elapsed time = 1.315 seconds 00:05:25.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.330 EAL: request: mp_malloc_sync 00:05:25.330 EAL: No shared files mode enabled, IPC is disabled 00:05:25.330 EAL: Heap on socket 0 was shrunk by 2MB 00:05:25.330 EAL: No shared files mode enabled, IPC is disabled 00:05:25.330 EAL: No shared files mode enabled, IPC is disabled 00:05:25.330 EAL: No shared files mode enabled, IPC is disabled 00:05:25.331 00:05:25.331 real 0m1.433s 00:05:25.331 user 0m0.804s 00:05:25.331 sys 0m0.594s 00:05:25.331 00:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.331 00:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.331 ************************************ 00:05:25.331 END TEST env_vtophys 00:05:25.331 ************************************ 00:05:25.331 00:45:09 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.331 00:45:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.331 00:45:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.331 00:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.331 ************************************ 00:05:25.331 START TEST env_pci 00:05:25.331 ************************************ 00:05:25.331 00:45:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.331 00:05:25.331 00:05:25.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.331 http://cunit.sourceforge.net/ 00:05:25.331 00:05:25.331 00:05:25.331 Suite: pci 00:05:25.331 Test: pci_hook ...[2024-07-23 00:45:09.360767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3275325 has claimed it 00:05:25.331 EAL: Cannot find device (10000:00:01.0) 00:05:25.331 EAL: Failed to attach device on primary process 00:05:25.331 passed 00:05:25.331 00:05:25.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.331 suites 1 1 n/a 0 0 00:05:25.331 tests 1 1 1 0 0 
00:05:25.331 asserts 25 25 25 0 n/a 00:05:25.331 00:05:25.331 Elapsed time = 0.021 seconds 00:05:25.331 00:05:25.331 real 0m0.033s 00:05:25.331 user 0m0.008s 00:05:25.331 sys 0m0.025s 00:05:25.331 00:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.331 00:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.331 ************************************ 00:05:25.331 END TEST env_pci 00:05:25.331 ************************************ 00:05:25.331 00:45:09 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:25.331 00:45:09 -- env/env.sh@15 -- # uname 00:05:25.331 00:45:09 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:25.331 00:45:09 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:25.331 00:45:09 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.331 00:45:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:25.331 00:45:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.331 00:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.331 ************************************ 00:05:25.331 START TEST env_dpdk_post_init 00:05:25.331 ************************************ 00:05:25.331 00:45:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.331 EAL: Detected CPU lcores: 48 00:05:25.331 EAL: Detected NUMA nodes: 2 00:05:25.331 EAL: Detected shared linkage of DPDK 00:05:25.331 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.331 EAL: Selected IOVA mode 'VA' 00:05:25.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.331 EAL: VFIO support initialized 00:05:25.331 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.331 EAL: Using IOMMU type 1 (Type 1) 00:05:25.331 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:25.331 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:25.590 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:26.525 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:29.805 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:29.805 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:29.805 Starting DPDK initialization... 00:05:29.805 Starting SPDK post initialization... 00:05:29.805 SPDK NVMe probe 00:05:29.805 Attaching to 0000:88:00.0 00:05:29.805 Attached to 0000:88:00.0 00:05:29.805 Cleaning up... 00:05:29.805 00:05:29.805 real 0m4.399s 00:05:29.805 user 0m3.262s 00:05:29.805 sys 0m0.196s 00:05:29.805 00:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.805 00:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.805 ************************************ 00:05:29.805 END TEST env_dpdk_post_init 00:05:29.805 ************************************ 00:05:29.805 00:45:13 -- env/env.sh@26 -- # uname 00:05:29.805 00:45:13 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.805 00:45:13 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.805 00:45:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.805 00:45:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.805 00:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.805 ************************************ 00:05:29.805 START TEST env_mem_callbacks 00:05:29.805 ************************************ 00:05:29.805 00:45:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.805 EAL: Detected CPU lcores: 48 00:05:29.805 EAL: Detected NUMA nodes: 2 00:05:29.805 EAL: Detected shared linkage of DPDK 00:05:29.805 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.805 EAL: Selected IOVA mode 'VA' 00:05:29.805 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.805 EAL: VFIO support initialized 00:05:29.805 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.805 00:05:29.805 00:05:29.805 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.805 http://cunit.sourceforge.net/ 00:05:29.805 00:05:29.805 00:05:29.805 Suite: memory 00:05:29.805 Test: test ... 
00:05:29.805 register 0x200000200000 2097152 00:05:29.805 malloc 3145728 00:05:29.805 register 0x200000400000 4194304 00:05:29.805 buf 0x200000500000 len 3145728 PASSED 00:05:29.805 malloc 64 00:05:29.805 buf 0x2000004fff40 len 64 PASSED 00:05:29.805 malloc 4194304 00:05:29.805 register 0x200000800000 6291456 00:05:29.805 buf 0x200000a00000 len 4194304 PASSED 00:05:29.805 free 0x200000500000 3145728 00:05:29.805 free 0x2000004fff40 64 00:05:29.805 unregister 0x200000400000 4194304 PASSED 00:05:29.805 free 0x200000a00000 4194304 00:05:29.805 unregister 0x200000800000 6291456 PASSED 00:05:29.805 malloc 8388608 00:05:29.805 register 0x200000400000 10485760 00:05:29.805 buf 0x200000600000 len 8388608 PASSED 00:05:29.805 free 0x200000600000 8388608 00:05:29.805 unregister 0x200000400000 10485760 PASSED 00:05:29.805 passed 00:05:29.805 00:05:29.805 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.805 suites 1 1 n/a 0 0 00:05:29.805 tests 1 1 1 0 0 00:05:29.805 asserts 15 15 15 0 n/a 00:05:29.805 00:05:29.805 Elapsed time = 0.005 seconds 00:05:29.805 00:05:29.805 real 0m0.047s 00:05:29.805 user 0m0.009s 00:05:29.805 sys 0m0.037s 00:05:29.805 00:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.805 00:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.805 ************************************ 00:05:29.805 END TEST env_mem_callbacks 00:05:29.805 ************************************ 00:05:29.805 00:05:29.805 real 0m6.230s 00:05:29.805 user 0m4.297s 00:05:29.805 sys 0m0.979s 00:05:29.805 00:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.805 00:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.805 ************************************ 00:05:29.805 END TEST env 00:05:29.805 ************************************ 00:05:29.805 00:45:13 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.805 00:45:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.805 00:45:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.805 00:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.805 ************************************ 00:05:29.805 START TEST rpc 00:05:29.805 ************************************ 00:05:29.805 00:45:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.805 * Looking for test storage... 00:05:29.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.805 00:45:13 -- rpc/rpc.sh@65 -- # spdk_pid=3275988 00:05:29.805 00:45:13 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:29.805 00:45:13 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.805 00:45:13 -- rpc/rpc.sh@67 -- # waitforlisten 3275988 00:05:29.805 00:45:13 -- common/autotest_common.sh@819 -- # '[' -z 3275988 ']' 00:05:29.805 00:45:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.805 00:45:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.805 00:45:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
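[Editor's note] The rpc suite starting here launches its own spdk_tgt with the bdev tracepoint group enabled and then waits for the UNIX domain socket, which is what the "Waiting for process to start up..." message reflects. A hand-rolled equivalent of that startup is sketched below; the polling loop and the spdk_get_version call are illustrative stand-ins for the real waitforlisten helper, and the whole thing normally runs as root.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$rootdir/build/bin/spdk_tgt" -e bdev &     # same invocation rpc.sh uses above
  spdk_pid=$!

  # Poor man's waitforlisten: poll for the default RPC socket, then sanity-check it.
  until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done
  "$rootdir/scripts/rpc.py" spdk_get_version

  kill "$spdk_pid"
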
00:05:29.805 00:45:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.806 00:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 [2024-07-23 00:45:14.022904] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:30.063 [2024-07-23 00:45:14.022996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275988 ] 00:05:30.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.063 [2024-07-23 00:45:14.080566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.064 [2024-07-23 00:45:14.162178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.064 [2024-07-23 00:45:14.162335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.064 [2024-07-23 00:45:14.162366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3275988' to capture a snapshot of events at runtime. 00:05:30.064 [2024-07-23 00:45:14.162378] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3275988 for offline analysis/debug. 00:05:30.064 [2024-07-23 00:45:14.162406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.998 00:45:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.998 00:45:14 -- common/autotest_common.sh@852 -- # return 0 00:05:30.998 00:45:14 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.998 00:45:14 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.998 00:45:14 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.998 00:45:14 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.998 00:45:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.998 00:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.998 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 ************************************ 00:05:30.998 START TEST rpc_integrity 00:05:30.998 ************************************ 00:05:30.998 00:45:14 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:30.998 00:45:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.998 00:45:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.998 00:45:14 -- rpc/rpc.sh@13 -- # jq length 00:05:30.998 00:45:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.998 00:45:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.998 00:45:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.998 00:45:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.998 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.998 { 00:05:30.998 "name": "Malloc0", 00:05:30.998 "aliases": [ 00:05:30.998 "77220603-4a39-430e-a531-9ca99630142a" 00:05:30.998 ], 00:05:30.998 "product_name": "Malloc disk", 00:05:30.998 "block_size": 512, 00:05:30.998 "num_blocks": 16384, 00:05:30.998 "uuid": "77220603-4a39-430e-a531-9ca99630142a", 00:05:30.998 "assigned_rate_limits": { 00:05:30.998 "rw_ios_per_sec": 0, 00:05:30.998 "rw_mbytes_per_sec": 0, 00:05:30.998 "r_mbytes_per_sec": 0, 00:05:30.998 "w_mbytes_per_sec": 0 00:05:30.998 }, 00:05:30.998 "claimed": false, 00:05:30.998 "zoned": false, 00:05:30.998 "supported_io_types": { 00:05:30.998 "read": true, 00:05:30.998 "write": true, 00:05:30.998 "unmap": true, 00:05:30.998 "write_zeroes": true, 00:05:30.998 "flush": true, 00:05:30.998 "reset": true, 00:05:30.998 "compare": false, 00:05:30.998 "compare_and_write": false, 00:05:30.998 "abort": true, 00:05:30.998 "nvme_admin": false, 00:05:30.998 "nvme_io": false 00:05:30.998 }, 00:05:30.998 "memory_domains": [ 00:05:30.998 { 00:05:30.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.998 "dma_device_type": 2 00:05:30.998 } 00:05:30.998 ], 00:05:30.998 "driver_specific": {} 00:05:30.998 } 00:05:30.998 ]' 00:05:30.998 00:45:15 -- rpc/rpc.sh@17 -- # jq length 00:05:30.998 00:45:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.998 00:45:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.998 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 [2024-07-23 00:45:15.059851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.998 [2024-07-23 00:45:15.059897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.998 [2024-07-23 00:45:15.059934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf725b0 00:05:30.998 [2024-07-23 00:45:15.059947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.998 [2024-07-23 00:45:15.061391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.998 [2024-07-23 00:45:15.061419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.998 Passthru0 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.998 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.998 { 00:05:30.998 "name": "Malloc0", 00:05:30.998 "aliases": [ 00:05:30.998 "77220603-4a39-430e-a531-9ca99630142a" 00:05:30.998 ], 00:05:30.998 "product_name": "Malloc disk", 00:05:30.998 "block_size": 512, 00:05:30.998 "num_blocks": 16384, 00:05:30.998 "uuid": "77220603-4a39-430e-a531-9ca99630142a", 00:05:30.998 "assigned_rate_limits": { 00:05:30.998 "rw_ios_per_sec": 0, 00:05:30.998 "rw_mbytes_per_sec": 0, 00:05:30.998 
"r_mbytes_per_sec": 0, 00:05:30.998 "w_mbytes_per_sec": 0 00:05:30.998 }, 00:05:30.998 "claimed": true, 00:05:30.998 "claim_type": "exclusive_write", 00:05:30.998 "zoned": false, 00:05:30.998 "supported_io_types": { 00:05:30.998 "read": true, 00:05:30.998 "write": true, 00:05:30.998 "unmap": true, 00:05:30.998 "write_zeroes": true, 00:05:30.998 "flush": true, 00:05:30.998 "reset": true, 00:05:30.998 "compare": false, 00:05:30.998 "compare_and_write": false, 00:05:30.998 "abort": true, 00:05:30.998 "nvme_admin": false, 00:05:30.998 "nvme_io": false 00:05:30.998 }, 00:05:30.998 "memory_domains": [ 00:05:30.998 { 00:05:30.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.998 "dma_device_type": 2 00:05:30.998 } 00:05:30.998 ], 00:05:30.998 "driver_specific": {} 00:05:30.998 }, 00:05:30.998 { 00:05:30.998 "name": "Passthru0", 00:05:30.998 "aliases": [ 00:05:30.998 "9d789691-a392-5c26-a0c7-dea8dea80f82" 00:05:30.998 ], 00:05:30.998 "product_name": "passthru", 00:05:30.998 "block_size": 512, 00:05:30.998 "num_blocks": 16384, 00:05:30.998 "uuid": "9d789691-a392-5c26-a0c7-dea8dea80f82", 00:05:30.998 "assigned_rate_limits": { 00:05:30.998 "rw_ios_per_sec": 0, 00:05:30.998 "rw_mbytes_per_sec": 0, 00:05:30.998 "r_mbytes_per_sec": 0, 00:05:30.998 "w_mbytes_per_sec": 0 00:05:30.998 }, 00:05:30.998 "claimed": false, 00:05:30.998 "zoned": false, 00:05:30.998 "supported_io_types": { 00:05:30.998 "read": true, 00:05:30.998 "write": true, 00:05:30.998 "unmap": true, 00:05:30.998 "write_zeroes": true, 00:05:30.998 "flush": true, 00:05:30.998 "reset": true, 00:05:30.998 "compare": false, 00:05:30.998 "compare_and_write": false, 00:05:30.998 "abort": true, 00:05:30.998 "nvme_admin": false, 00:05:30.998 "nvme_io": false 00:05:30.998 }, 00:05:30.998 "memory_domains": [ 00:05:30.998 { 00:05:30.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.998 "dma_device_type": 2 00:05:30.998 } 00:05:30.998 ], 00:05:30.998 "driver_specific": { 00:05:30.998 "passthru": { 00:05:30.998 "name": "Passthru0", 00:05:30.998 "base_bdev_name": "Malloc0" 00:05:30.998 } 00:05:30.998 } 00:05:30.998 } 00:05:30.998 ]' 00:05:30.998 00:45:15 -- rpc/rpc.sh@21 -- # jq length 00:05:30.998 00:45:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.998 00:45:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.998 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.998 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.998 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.998 00:45:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.998 00:45:15 -- rpc/rpc.sh@26 -- # jq length 00:05:30.998 00:45:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.998 00:05:30.998 real 0m0.232s 00:05:30.998 user 0m0.153s 00:05:30.998 sys 0m0.020s 00:05:30.998 00:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.998 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.998 ************************************ 
00:05:30.998 END TEST rpc_integrity 00:05:30.998 ************************************ 00:05:31.256 00:45:15 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.256 00:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.256 00:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.256 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.256 ************************************ 00:05:31.256 START TEST rpc_plugins 00:05:31.256 ************************************ 00:05:31.256 00:45:15 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:31.256 00:45:15 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.256 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.256 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.256 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.256 00:45:15 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.257 00:45:15 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.257 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.257 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.257 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.257 00:45:15 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.257 { 00:05:31.257 "name": "Malloc1", 00:05:31.257 "aliases": [ 00:05:31.257 "4b6bf0e2-49b9-49c5-8be8-5e22d06b9c7f" 00:05:31.257 ], 00:05:31.257 "product_name": "Malloc disk", 00:05:31.257 "block_size": 4096, 00:05:31.257 "num_blocks": 256, 00:05:31.257 "uuid": "4b6bf0e2-49b9-49c5-8be8-5e22d06b9c7f", 00:05:31.257 "assigned_rate_limits": { 00:05:31.257 "rw_ios_per_sec": 0, 00:05:31.257 "rw_mbytes_per_sec": 0, 00:05:31.257 "r_mbytes_per_sec": 0, 00:05:31.257 "w_mbytes_per_sec": 0 00:05:31.257 }, 00:05:31.257 "claimed": false, 00:05:31.257 "zoned": false, 00:05:31.257 "supported_io_types": { 00:05:31.257 "read": true, 00:05:31.257 "write": true, 00:05:31.257 "unmap": true, 00:05:31.257 "write_zeroes": true, 00:05:31.257 "flush": true, 00:05:31.257 "reset": true, 00:05:31.257 "compare": false, 00:05:31.257 "compare_and_write": false, 00:05:31.257 "abort": true, 00:05:31.257 "nvme_admin": false, 00:05:31.257 "nvme_io": false 00:05:31.257 }, 00:05:31.257 "memory_domains": [ 00:05:31.257 { 00:05:31.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.257 "dma_device_type": 2 00:05:31.257 } 00:05:31.257 ], 00:05:31.257 "driver_specific": {} 00:05:31.257 } 00:05:31.257 ]' 00:05:31.257 00:45:15 -- rpc/rpc.sh@32 -- # jq length 00:05:31.257 00:45:15 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.257 00:45:15 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.257 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.257 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.257 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.257 00:45:15 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.257 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.257 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.257 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.257 00:45:15 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.257 00:45:15 -- rpc/rpc.sh@36 -- # jq length 00:05:31.257 00:45:15 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.257 00:05:31.257 real 0m0.113s 00:05:31.257 user 0m0.078s 00:05:31.257 sys 0m0.008s 00:05:31.257 00:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.257 00:45:15 -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.257 ************************************ 00:05:31.257 END TEST rpc_plugins 00:05:31.257 ************************************ 00:05:31.257 00:45:15 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.257 00:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.257 00:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.257 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.257 ************************************ 00:05:31.257 START TEST rpc_trace_cmd_test 00:05:31.257 ************************************ 00:05:31.257 00:45:15 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:31.257 00:45:15 -- rpc/rpc.sh@40 -- # local info 00:05:31.257 00:45:15 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.257 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.257 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.257 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.257 00:45:15 -- rpc/rpc.sh@42 -- # info='{ 00:05:31.257 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3275988", 00:05:31.257 "tpoint_group_mask": "0x8", 00:05:31.257 "iscsi_conn": { 00:05:31.257 "mask": "0x2", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "scsi": { 00:05:31.257 "mask": "0x4", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "bdev": { 00:05:31.257 "mask": "0x8", 00:05:31.257 "tpoint_mask": "0xffffffffffffffff" 00:05:31.257 }, 00:05:31.257 "nvmf_rdma": { 00:05:31.257 "mask": "0x10", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "nvmf_tcp": { 00:05:31.257 "mask": "0x20", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "ftl": { 00:05:31.257 "mask": "0x40", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "blobfs": { 00:05:31.257 "mask": "0x80", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "dsa": { 00:05:31.257 "mask": "0x200", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "thread": { 00:05:31.257 "mask": "0x400", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "nvme_pcie": { 00:05:31.257 "mask": "0x800", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "iaa": { 00:05:31.257 "mask": "0x1000", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "nvme_tcp": { 00:05:31.257 "mask": "0x2000", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 }, 00:05:31.257 "bdev_nvme": { 00:05:31.257 "mask": "0x4000", 00:05:31.257 "tpoint_mask": "0x0" 00:05:31.257 } 00:05:31.257 }' 00:05:31.257 00:45:15 -- rpc/rpc.sh@43 -- # jq length 00:05:31.257 00:45:15 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:31.257 00:45:15 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.257 00:45:15 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.257 00:45:15 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.515 00:45:15 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.515 00:45:15 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.515 00:45:15 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.515 00:45:15 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.515 00:45:15 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.515 00:05:31.515 real 0m0.198s 00:05:31.515 user 0m0.172s 00:05:31.515 sys 0m0.017s 00:05:31.515 00:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.515 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.515 ************************************ 
00:05:31.515 END TEST rpc_trace_cmd_test 00:05:31.515 ************************************ 00:05:31.515 00:45:15 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.515 00:45:15 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.515 00:45:15 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.515 00:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.515 00:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.515 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.515 ************************************ 00:05:31.515 START TEST rpc_daemon_integrity 00:05:31.515 ************************************ 00:05:31.515 00:45:15 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:31.515 00:45:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.515 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.515 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.515 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.515 00:45:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.515 00:45:15 -- rpc/rpc.sh@13 -- # jq length 00:05:31.515 00:45:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.515 00:45:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.515 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.515 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.515 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.515 00:45:15 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.515 00:45:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.515 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.515 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.515 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.515 00:45:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.515 { 00:05:31.515 "name": "Malloc2", 00:05:31.515 "aliases": [ 00:05:31.515 "f6e983f1-ddbf-4443-aca1-396fe330faf4" 00:05:31.515 ], 00:05:31.515 "product_name": "Malloc disk", 00:05:31.515 "block_size": 512, 00:05:31.515 "num_blocks": 16384, 00:05:31.515 "uuid": "f6e983f1-ddbf-4443-aca1-396fe330faf4", 00:05:31.515 "assigned_rate_limits": { 00:05:31.515 "rw_ios_per_sec": 0, 00:05:31.515 "rw_mbytes_per_sec": 0, 00:05:31.515 "r_mbytes_per_sec": 0, 00:05:31.516 "w_mbytes_per_sec": 0 00:05:31.516 }, 00:05:31.516 "claimed": false, 00:05:31.516 "zoned": false, 00:05:31.516 "supported_io_types": { 00:05:31.516 "read": true, 00:05:31.516 "write": true, 00:05:31.516 "unmap": true, 00:05:31.516 "write_zeroes": true, 00:05:31.516 "flush": true, 00:05:31.516 "reset": true, 00:05:31.516 "compare": false, 00:05:31.516 "compare_and_write": false, 00:05:31.516 "abort": true, 00:05:31.516 "nvme_admin": false, 00:05:31.516 "nvme_io": false 00:05:31.516 }, 00:05:31.516 "memory_domains": [ 00:05:31.516 { 00:05:31.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.516 "dma_device_type": 2 00:05:31.516 } 00:05:31.516 ], 00:05:31.516 "driver_specific": {} 00:05:31.516 } 00:05:31.516 ]' 00:05:31.516 00:45:15 -- rpc/rpc.sh@17 -- # jq length 00:05:31.516 00:45:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.516 00:45:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.516 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.516 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.516 [2024-07-23 00:45:15.677670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.516 [2024-07-23 
00:45:15.677716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.516 [2024-07-23 00:45:15.677739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10336f0 00:05:31.516 [2024-07-23 00:45:15.677754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.516 [2024-07-23 00:45:15.679025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.516 [2024-07-23 00:45:15.679053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.516 Passthru0 00:05:31.516 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.516 00:45:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.516 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.516 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.516 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.516 00:45:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.516 { 00:05:31.516 "name": "Malloc2", 00:05:31.516 "aliases": [ 00:05:31.516 "f6e983f1-ddbf-4443-aca1-396fe330faf4" 00:05:31.516 ], 00:05:31.516 "product_name": "Malloc disk", 00:05:31.516 "block_size": 512, 00:05:31.516 "num_blocks": 16384, 00:05:31.516 "uuid": "f6e983f1-ddbf-4443-aca1-396fe330faf4", 00:05:31.516 "assigned_rate_limits": { 00:05:31.516 "rw_ios_per_sec": 0, 00:05:31.516 "rw_mbytes_per_sec": 0, 00:05:31.516 "r_mbytes_per_sec": 0, 00:05:31.516 "w_mbytes_per_sec": 0 00:05:31.516 }, 00:05:31.516 "claimed": true, 00:05:31.516 "claim_type": "exclusive_write", 00:05:31.516 "zoned": false, 00:05:31.516 "supported_io_types": { 00:05:31.516 "read": true, 00:05:31.516 "write": true, 00:05:31.516 "unmap": true, 00:05:31.516 "write_zeroes": true, 00:05:31.516 "flush": true, 00:05:31.516 "reset": true, 00:05:31.516 "compare": false, 00:05:31.516 "compare_and_write": false, 00:05:31.516 "abort": true, 00:05:31.516 "nvme_admin": false, 00:05:31.516 "nvme_io": false 00:05:31.516 }, 00:05:31.516 "memory_domains": [ 00:05:31.516 { 00:05:31.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.516 "dma_device_type": 2 00:05:31.516 } 00:05:31.516 ], 00:05:31.516 "driver_specific": {} 00:05:31.516 }, 00:05:31.516 { 00:05:31.516 "name": "Passthru0", 00:05:31.516 "aliases": [ 00:05:31.516 "81641db6-f101-5348-a4ba-a70cbdbceddc" 00:05:31.516 ], 00:05:31.516 "product_name": "passthru", 00:05:31.516 "block_size": 512, 00:05:31.516 "num_blocks": 16384, 00:05:31.516 "uuid": "81641db6-f101-5348-a4ba-a70cbdbceddc", 00:05:31.516 "assigned_rate_limits": { 00:05:31.516 "rw_ios_per_sec": 0, 00:05:31.516 "rw_mbytes_per_sec": 0, 00:05:31.516 "r_mbytes_per_sec": 0, 00:05:31.516 "w_mbytes_per_sec": 0 00:05:31.516 }, 00:05:31.516 "claimed": false, 00:05:31.516 "zoned": false, 00:05:31.516 "supported_io_types": { 00:05:31.516 "read": true, 00:05:31.516 "write": true, 00:05:31.516 "unmap": true, 00:05:31.516 "write_zeroes": true, 00:05:31.516 "flush": true, 00:05:31.516 "reset": true, 00:05:31.516 "compare": false, 00:05:31.516 "compare_and_write": false, 00:05:31.516 "abort": true, 00:05:31.516 "nvme_admin": false, 00:05:31.516 "nvme_io": false 00:05:31.516 }, 00:05:31.516 "memory_domains": [ 00:05:31.516 { 00:05:31.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.516 "dma_device_type": 2 00:05:31.516 } 00:05:31.516 ], 00:05:31.516 "driver_specific": { 00:05:31.516 "passthru": { 00:05:31.516 "name": "Passthru0", 00:05:31.516 "base_bdev_name": "Malloc2" 00:05:31.516 } 00:05:31.516 } 00:05:31.516 } 
00:05:31.516 ]' 00:05:31.516 00:45:15 -- rpc/rpc.sh@21 -- # jq length 00:05:31.774 00:45:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.774 00:45:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.774 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.774 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.774 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.774 00:45:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.774 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.774 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.774 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.774 00:45:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.774 00:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.774 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.774 00:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.774 00:45:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.774 00:45:15 -- rpc/rpc.sh@26 -- # jq length 00:05:31.774 00:45:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.774 00:05:31.774 real 0m0.222s 00:05:31.774 user 0m0.147s 00:05:31.774 sys 0m0.021s 00:05:31.774 00:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.774 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.774 ************************************ 00:05:31.774 END TEST rpc_daemon_integrity 00:05:31.774 ************************************ 00:05:31.774 00:45:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.774 00:45:15 -- rpc/rpc.sh@84 -- # killprocess 3275988 00:05:31.774 00:45:15 -- common/autotest_common.sh@926 -- # '[' -z 3275988 ']' 00:05:31.774 00:45:15 -- common/autotest_common.sh@930 -- # kill -0 3275988 00:05:31.774 00:45:15 -- common/autotest_common.sh@931 -- # uname 00:05:31.774 00:45:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.774 00:45:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3275988 00:05:31.774 00:45:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.774 00:45:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.774 00:45:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3275988' 00:05:31.774 killing process with pid 3275988 00:05:31.774 00:45:15 -- common/autotest_common.sh@945 -- # kill 3275988 00:05:31.774 00:45:15 -- common/autotest_common.sh@950 -- # wait 3275988 00:05:32.033 00:05:32.033 real 0m2.306s 00:05:32.033 user 0m2.969s 00:05:32.033 sys 0m0.541s 00:05:32.033 00:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.033 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.033 ************************************ 00:05:32.033 END TEST rpc 00:05:32.033 ************************************ 00:05:32.292 00:45:16 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.292 00:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.292 00:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.292 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 ************************************ 00:05:32.292 START TEST rpc_client 00:05:32.292 ************************************ 00:05:32.292 00:45:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
00:05:32.292 * Looking for test storage... 00:05:32.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:32.292 00:45:16 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:32.292 OK 00:05:32.292 00:45:16 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:32.292 00:05:32.292 real 0m0.061s 00:05:32.292 user 0m0.031s 00:05:32.292 sys 0m0.035s 00:05:32.292 00:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.292 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 ************************************ 00:05:32.292 END TEST rpc_client 00:05:32.292 ************************************ 00:05:32.292 00:45:16 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.292 00:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.292 00:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.292 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 ************************************ 00:05:32.292 START TEST json_config 00:05:32.292 ************************************ 00:05:32.292 00:45:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.292 00:45:16 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.292 00:45:16 -- nvmf/common.sh@7 -- # uname -s 00:05:32.292 00:45:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.292 00:45:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.292 00:45:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.292 00:45:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.292 00:45:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.292 00:45:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.292 00:45:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.292 00:45:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.292 00:45:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.292 00:45:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.292 00:45:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.292 00:45:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.292 00:45:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.292 00:45:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.292 00:45:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.292 00:45:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.292 00:45:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.292 00:45:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.292 00:45:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.292 00:45:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.292 00:45:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.292 00:45:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.292 00:45:16 -- paths/export.sh@5 -- # export PATH 00:05:32.292 00:45:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.292 00:45:16 -- nvmf/common.sh@46 -- # : 0 00:05:32.292 00:45:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:32.292 00:45:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:32.292 00:45:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:32.292 00:45:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.292 00:45:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.292 00:45:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:32.292 00:45:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:32.292 00:45:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:32.292 00:45:16 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:32.292 00:45:16 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:32.292 00:45:16 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:32.292 00:45:16 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:32.292 00:45:16 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:32.292 00:45:16 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:32.292 00:45:16 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:32.292 00:45:16 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:32.292 00:45:16 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:32.292 00:45:16 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:32.292 00:45:16 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.292 00:45:16 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:32.292 INFO: JSON configuration test init 00:05:32.292 00:45:16 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:32.292 00:45:16 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:32.292 00:45:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.292 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 00:45:16 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:32.292 00:45:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.292 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 00:45:16 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:32.292 00:45:16 -- json_config/json_config.sh@98 -- # local app=target 00:05:32.292 00:45:16 -- json_config/json_config.sh@99 -- # shift 00:05:32.292 00:45:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:32.292 00:45:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:32.292 00:45:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=3276471 00:05:32.292 00:45:16 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:32.292 00:45:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:32.292 Waiting for target to run... 00:05:32.292 00:45:16 -- json_config/json_config.sh@114 -- # waitforlisten 3276471 /var/tmp/spdk_tgt.sock 00:05:32.292 00:45:16 -- common/autotest_common.sh@819 -- # '[' -z 3276471 ']' 00:05:32.292 00:45:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.292 00:45:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.292 00:45:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.292 00:45:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.292 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:32.292 [2024-07-23 00:45:16.441432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
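json_config starts the target with --wait-for-rpc, which holds subsystem initialization until the test has had a chance to push settings over RPC. A minimal sketch of that handshake, assuming the same checkout layout and the /var/tmp/spdk_tgt.sock socket used here; framework_start_init is the RPC that lets the target finish starting up:

    # Start the target but defer subsystem initialization.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # Wait for the RPC socket, apply any pre-init configuration, then let initialization proceed.
    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    until $rpc rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
    $rpc framework_start_init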
00:05:32.292 [2024-07-23 00:45:16.441531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276471 ] 00:05:32.292 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.860 [2024-07-23 00:45:16.787257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.860 [2024-07-23 00:45:16.848011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.860 [2024-07-23 00:45:16.848200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.426 00:45:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.426 00:45:17 -- common/autotest_common.sh@852 -- # return 0 00:05:33.426 00:45:17 -- json_config/json_config.sh@115 -- # echo '' 00:05:33.426 00:05:33.426 00:45:17 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:33.426 00:45:17 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:33.426 00:45:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.426 00:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.426 00:45:17 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:33.426 00:45:17 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:33.426 00:45:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.427 00:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.427 00:45:17 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:33.427 00:45:17 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:33.427 00:45:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:36.707 00:45:20 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:36.707 00:45:20 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:36.707 00:45:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.707 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:36.707 00:45:20 -- json_config/json_config.sh@48 -- # local ret=0 00:05:36.707 00:45:20 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:36.707 00:45:20 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:36.707 00:45:20 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:36.707 00:45:20 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:36.707 00:45:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:36.707 00:45:20 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:36.707 00:45:20 -- json_config/json_config.sh@51 -- # local get_types 00:05:36.707 00:45:20 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:36.707 00:45:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:36.707 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:36.707 00:45:20 -- json_config/json_config.sh@58 -- # return 0 00:05:36.707 00:45:20 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:36.707 00:45:20 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:36.707 00:45:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.707 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:36.707 00:45:20 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:36.707 00:45:20 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:36.707 00:45:20 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.707 00:45:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.965 MallocForNvmf0 00:05:36.965 00:45:21 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.965 00:45:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.223 MallocForNvmf1 00:05:37.223 00:45:21 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.223 00:45:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.481 [2024-07-23 00:45:21.473724] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.481 00:45:21 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.481 00:45:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.739 00:45:21 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.739 00:45:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.997 00:45:21 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:37.997 00:45:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.255 00:45:22 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.255 00:45:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.255 [2024-07-23 00:45:22.437119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
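The create_nvmf_subsystem_config step that just completed boils down to a handful of RPCs. Condensed into one sketch; the transport options, NQN, and 127.0.0.1:4420 listener are the values shown in the log, and the two malloc bdevs stand in for MallocForNvmf0/1:

    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # Backing bdevs for the namespaces.
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, subsystem, namespaces, and a listener on 127.0.0.1:4420.
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420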
00:05:38.255 00:45:22 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:38.255 00:45:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.255 00:45:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.512 00:45:22 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:38.512 00:45:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.512 00:45:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.512 00:45:22 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:38.512 00:45:22 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.512 00:45:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.512 MallocBdevForConfigChangeCheck 00:05:38.771 00:45:22 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:38.771 00:45:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.771 00:45:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.771 00:45:22 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:38.771 00:45:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.029 00:45:23 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:39.029 INFO: shutting down applications... 00:05:39.029 00:45:23 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:39.029 00:45:23 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:39.029 00:45:23 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:39.029 00:45:23 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:40.929 Calling clear_iscsi_subsystem 00:05:40.929 Calling clear_nvmf_subsystem 00:05:40.929 Calling clear_nbd_subsystem 00:05:40.929 Calling clear_ublk_subsystem 00:05:40.929 Calling clear_vhost_blk_subsystem 00:05:40.929 Calling clear_vhost_scsi_subsystem 00:05:40.929 Calling clear_scheduler_subsystem 00:05:40.929 Calling clear_bdev_subsystem 00:05:40.929 Calling clear_accel_subsystem 00:05:40.929 Calling clear_vmd_subsystem 00:05:40.929 Calling clear_sock_subsystem 00:05:40.929 Calling clear_iobuf_subsystem 00:05:40.929 00:45:24 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:40.929 00:45:24 -- json_config/json_config.sh@396 -- # count=100 00:05:40.929 00:45:24 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:40.929 00:45:24 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.929 00:45:24 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:40.929 00:45:24 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.929 00:45:25 -- json_config/json_config.sh@398 -- # break 00:05:40.929 00:45:25 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:40.929 00:45:25 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:40.929 00:45:25 -- json_config/json_config.sh@120 -- # local app=target 00:05:40.929 00:45:25 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:40.929 00:45:25 -- json_config/json_config.sh@124 -- # [[ -n 3276471 ]] 00:05:40.929 00:45:25 -- json_config/json_config.sh@127 -- # kill -SIGINT 3276471 00:05:40.929 00:45:25 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:40.929 00:45:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:40.929 00:45:25 -- json_config/json_config.sh@130 -- # kill -0 3276471 00:05:40.929 00:45:25 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:41.497 00:45:25 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:41.497 00:45:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:41.497 00:45:25 -- json_config/json_config.sh@130 -- # kill -0 3276471 00:05:41.497 00:45:25 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:41.497 00:45:25 -- json_config/json_config.sh@132 -- # break 00:05:41.497 00:45:25 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:41.497 00:45:25 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:41.497 SPDK target shutdown done 00:05:41.497 00:45:25 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:41.497 INFO: relaunching applications... 00:05:41.497 00:45:25 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.497 00:45:25 -- json_config/json_config.sh@98 -- # local app=target 00:05:41.497 00:45:25 -- json_config/json_config.sh@99 -- # shift 00:05:41.497 00:45:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:41.497 00:45:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:41.497 00:45:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:41.497 00:45:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:41.497 00:45:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:41.497 00:45:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=3277692 00:05:41.497 00:45:25 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.497 00:45:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:41.497 Waiting for target to run... 00:05:41.497 00:45:25 -- json_config/json_config.sh@114 -- # waitforlisten 3277692 /var/tmp/spdk_tgt.sock 00:05:41.497 00:45:25 -- common/autotest_common.sh@819 -- # '[' -z 3277692 ']' 00:05:41.497 00:45:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.497 00:45:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.497 00:45:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.497 00:45:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.497 00:45:25 -- common/autotest_common.sh@10 -- # set +x 00:05:41.497 [2024-07-23 00:45:25.632418] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
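The relaunch above replays the configuration captured from the first target instance. A short sketch of that round-trip, assuming the socket and file names used in this test; save_config dumps the live configuration and --json feeds it back in at startup:

    # Capture the running target's configuration to a file.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # After shutting that target down, relaunch one that applies the same configuration at boot.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &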
00:05:41.497 [2024-07-23 00:45:25.632512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277692 ] 00:05:41.497 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.065 [2024-07-23 00:45:25.987344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.065 [2024-07-23 00:45:26.048373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.065 [2024-07-23 00:45:26.048562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.345 [2024-07-23 00:45:29.065465] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.345 [2024-07-23 00:45:29.097917] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.602 00:45:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.603 00:45:29 -- common/autotest_common.sh@852 -- # return 0 00:05:45.603 00:45:29 -- json_config/json_config.sh@115 -- # echo '' 00:05:45.603 00:05:45.603 00:45:29 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:45.603 00:45:29 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.603 INFO: Checking if target configuration is the same... 00:05:45.603 00:45:29 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.603 00:45:29 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:45.603 00:45:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.603 + '[' 2 -ne 2 ']' 00:05:45.603 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.603 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.603 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.603 +++ basename /dev/fd/62 00:05:45.603 ++ mktemp /tmp/62.XXX 00:05:45.603 + tmp_file_1=/tmp/62.g6a 00:05:45.603 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.603 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.603 + tmp_file_2=/tmp/spdk_tgt_config.json.dfD 00:05:45.603 + ret=0 00:05:45.603 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.860 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.860 + diff -u /tmp/62.g6a /tmp/spdk_tgt_config.json.dfD 00:05:45.860 + echo 'INFO: JSON config files are the same' 00:05:45.860 INFO: JSON config files are the same 00:05:45.860 + rm /tmp/62.g6a /tmp/spdk_tgt_config.json.dfD 00:05:45.860 + exit 0 00:05:45.860 00:45:30 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:45.860 00:45:30 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:45.860 INFO: changing configuration and checking if this can be detected... 
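The "configuration is the same" check works by normalizing both sides with the config_filter.py helper shown above (sort mode) and diffing them; identical output means the relaunched target reproduced the saved state. A hedged sketch, assuming the spdk_tgt_config.json saved in the previous step:

    # Normalize the saved file and the live configuration, then compare them.
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    diff -u /tmp/saved.json /tmp/live.json && echo "INFO: JSON config files are the same"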
00:05:45.860 00:45:30 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.860 00:45:30 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.119 00:45:30 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.120 00:45:30 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:46.120 00:45:30 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.120 + '[' 2 -ne 2 ']' 00:05:46.120 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.120 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.120 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.120 +++ basename /dev/fd/62 00:05:46.120 ++ mktemp /tmp/62.XXX 00:05:46.120 + tmp_file_1=/tmp/62.uxL 00:05:46.120 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.120 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.120 + tmp_file_2=/tmp/spdk_tgt_config.json.qfK 00:05:46.120 + ret=0 00:05:46.120 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.686 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.686 + diff -u /tmp/62.uxL /tmp/spdk_tgt_config.json.qfK 00:05:46.686 + ret=1 00:05:46.686 + echo '=== Start of file: /tmp/62.uxL ===' 00:05:46.686 + cat /tmp/62.uxL 00:05:46.686 + echo '=== End of file: /tmp/62.uxL ===' 00:05:46.686 + echo '' 00:05:46.686 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qfK ===' 00:05:46.686 + cat /tmp/spdk_tgt_config.json.qfK 00:05:46.686 + echo '=== End of file: /tmp/spdk_tgt_config.json.qfK ===' 00:05:46.686 + echo '' 00:05:46.686 + rm /tmp/62.uxL /tmp/spdk_tgt_config.json.qfK 00:05:46.686 + exit 1 00:05:46.686 00:45:30 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:46.686 INFO: configuration change detected. 
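The change-detection half is the mirror image: delete the sentinel MallocBdevForConfigChangeCheck bdev created earlier and repeat the same comparison, this time expecting a non-empty diff. A brief sketch under the same assumptions as above:

    # Remove the sentinel bdev; the live configuration should now differ from the saved file.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
           | ./test/json_config/config_filter.py -method sort \
           | diff -u /tmp/saved.json - > /dev/null; then
        echo "ERROR: configuration change was not detected"; exit 1
    fi
    echo "INFO: configuration change detected."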
00:05:46.686 00:45:30 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:46.686 00:45:30 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:46.686 00:45:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.686 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.686 00:45:30 -- json_config/json_config.sh@360 -- # local ret=0 00:05:46.686 00:45:30 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:46.686 00:45:30 -- json_config/json_config.sh@370 -- # [[ -n 3277692 ]] 00:05:46.686 00:45:30 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:46.686 00:45:30 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:46.686 00:45:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.686 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.686 00:45:30 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:46.686 00:45:30 -- json_config/json_config.sh@246 -- # uname -s 00:05:46.686 00:45:30 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:46.686 00:45:30 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:46.686 00:45:30 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:46.686 00:45:30 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:46.686 00:45:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.686 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.686 00:45:30 -- json_config/json_config.sh@376 -- # killprocess 3277692 00:05:46.686 00:45:30 -- common/autotest_common.sh@926 -- # '[' -z 3277692 ']' 00:05:46.686 00:45:30 -- common/autotest_common.sh@930 -- # kill -0 3277692 00:05:46.686 00:45:30 -- common/autotest_common.sh@931 -- # uname 00:05:46.686 00:45:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.686 00:45:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3277692 00:05:46.686 00:45:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.686 00:45:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.686 00:45:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3277692' 00:05:46.686 killing process with pid 3277692 00:05:46.686 00:45:30 -- common/autotest_common.sh@945 -- # kill 3277692 00:05:46.686 00:45:30 -- common/autotest_common.sh@950 -- # wait 3277692 00:05:48.585 00:45:32 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.585 00:45:32 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:48.585 00:45:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:48.585 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 00:45:32 -- json_config/json_config.sh@381 -- # return 0 00:05:48.585 00:45:32 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:48.585 INFO: Success 00:05:48.585 00:05:48.585 real 0m16.030s 00:05:48.585 user 0m18.435s 00:05:48.585 sys 0m1.915s 00:05:48.585 00:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.585 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 ************************************ 00:05:48.585 END TEST json_config 00:05:48.585 ************************************ 00:05:48.585 00:45:32 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.585 00:45:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.585 00:45:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.585 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 ************************************ 00:05:48.585 START TEST json_config_extra_key 00:05:48.585 ************************************ 00:05:48.585 00:45:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.585 00:45:32 -- nvmf/common.sh@7 -- # uname -s 00:05:48.585 00:45:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.585 00:45:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.585 00:45:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.585 00:45:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.585 00:45:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.585 00:45:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.585 00:45:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.585 00:45:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.585 00:45:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.585 00:45:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.585 00:45:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:48.585 00:45:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:48.585 00:45:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.585 00:45:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.585 00:45:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.585 00:45:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.585 00:45:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.585 00:45:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.585 00:45:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.585 00:45:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.585 00:45:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.585 00:45:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.585 00:45:32 -- paths/export.sh@5 -- # export PATH 00:05:48.585 00:45:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.585 00:45:32 -- nvmf/common.sh@46 -- # : 0 00:05:48.585 00:45:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:48.585 00:45:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:48.585 00:45:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:48.585 00:45:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.585 00:45:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.585 00:45:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:48.585 00:45:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:48.585 00:45:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:48.585 INFO: launching applications... 
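The declarations echoed above show how json_config_extra_key.sh keeps per-application state: one bash associative array per attribute, all keyed by the logical app name ('target'). A compact illustration of the pattern, with the values taken from the log (the final echo is added for illustration):

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
  app=target
  echo "launching $app with ${app_params[$app]} on ${app_socket[$app]} using ${configs_path[$app]}"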
00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3278637 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:48.585 Waiting for target to run... 00:05:48.585 00:45:32 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3278637 /var/tmp/spdk_tgt.sock 00:05:48.585 00:45:32 -- common/autotest_common.sh@819 -- # '[' -z 3278637 ']' 00:05:48.585 00:45:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.585 00:45:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.585 00:45:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.585 00:45:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.585 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 [2024-07-23 00:45:32.487921] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:48.585 [2024-07-23 00:45:32.488027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278637 ] 00:05:48.585 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.843 [2024-07-23 00:45:32.984032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.102 [2024-07-23 00:45:33.059866] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.102 [2024-07-23 00:45:33.060057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.361 00:45:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.361 00:45:33 -- common/autotest_common.sh@852 -- # return 0 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:49.361 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:49.361 INFO: shutting down applications... 
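The launch just logged boils down to starting spdk_tgt with the extra-key JSON and waiting for its RPC socket to answer. The command line below is the one from the log; the polling loop is only a rough stand-in for the harness's waitforlisten helper, whose real implementation is not shown here.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/test/json_config/extra_key.json" &
  tgt_pid=$!
  # Assumed wait strategy: poll an RPC that always exists until the socket is live.
  for _ in $(seq 1 30); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.5
  done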
00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3278637 ]] 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3278637 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3278637 00:05:49.361 00:45:33 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3278637 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:50.134 SPDK target shutdown done 00:05:50.134 00:45:33 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:50.134 Success 00:05:50.134 00:05:50.134 real 0m1.528s 00:05:50.134 user 0m1.313s 00:05:50.134 sys 0m0.603s 00:05:50.134 00:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.134 00:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:50.134 ************************************ 00:05:50.134 END TEST json_config_extra_key 00:05:50.134 ************************************ 00:05:50.134 00:45:33 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.134 00:45:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.134 00:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.134 00:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:50.134 ************************************ 00:05:50.134 START TEST alias_rpc 00:05:50.134 ************************************ 00:05:50.134 00:45:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.134 * Looking for test storage... 00:05:50.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.134 00:45:34 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.134 00:45:34 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3278820 00:05:50.134 00:45:34 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.134 00:45:34 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3278820 00:05:50.134 00:45:34 -- common/autotest_common.sh@819 -- # '[' -z 3278820 ']' 00:05:50.134 00:45:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.134 00:45:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.134 00:45:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.134 00:45:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.134 00:45:34 -- common/autotest_common.sh@10 -- # set +x 00:05:50.134 [2024-07-23 00:45:34.047908] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:50.134 [2024-07-23 00:45:34.048012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278820 ] 00:05:50.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.134 [2024-07-23 00:45:34.112034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.134 [2024-07-23 00:45:34.203385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.134 [2024-07-23 00:45:34.203567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.068 00:45:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.068 00:45:34 -- common/autotest_common.sh@852 -- # return 0 00:05:51.068 00:45:34 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:51.068 00:45:35 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3278820 00:05:51.068 00:45:35 -- common/autotest_common.sh@926 -- # '[' -z 3278820 ']' 00:05:51.068 00:45:35 -- common/autotest_common.sh@930 -- # kill -0 3278820 00:05:51.068 00:45:35 -- common/autotest_common.sh@931 -- # uname 00:05:51.068 00:45:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.068 00:45:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3278820 00:05:51.326 00:45:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.326 00:45:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.326 00:45:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3278820' 00:05:51.326 killing process with pid 3278820 00:05:51.326 00:45:35 -- common/autotest_common.sh@945 -- # kill 3278820 00:05:51.326 00:45:35 -- common/autotest_common.sh@950 -- # wait 3278820 00:05:51.584 00:05:51.584 real 0m1.727s 00:05:51.584 user 0m1.973s 00:05:51.584 sys 0m0.468s 00:05:51.584 00:45:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.584 00:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.584 ************************************ 00:05:51.585 END TEST alias_rpc 00:05:51.585 ************************************ 00:05:51.585 00:45:35 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:51.585 00:45:35 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.585 00:45:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.585 00:45:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.585 00:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.585 ************************************ 00:05:51.585 START TEST spdkcli_tcp 00:05:51.585 ************************************ 00:05:51.585 00:45:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.585 * Looking for test storage... 
00:05:51.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:51.585 00:45:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:51.585 00:45:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:51.585 00:45:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.585 00:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3279143 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:51.585 00:45:35 -- spdkcli/tcp.sh@27 -- # waitforlisten 3279143 00:05:51.585 00:45:35 -- common/autotest_common.sh@819 -- # '[' -z 3279143 ']' 00:05:51.585 00:45:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.585 00:45:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.585 00:45:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.585 00:45:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.585 00:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.843 [2024-07-23 00:45:35.790974] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:51.843 [2024-07-23 00:45:35.791059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279143 ] 00:05:51.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.843 [2024-07-23 00:45:35.848919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.843 [2024-07-23 00:45:35.931998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.843 [2024-07-23 00:45:35.932221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.843 [2024-07-23 00:45:35.932226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.776 00:45:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.776 00:45:36 -- common/autotest_common.sh@852 -- # return 0 00:05:52.776 00:45:36 -- spdkcli/tcp.sh@31 -- # socat_pid=3279278 00:05:52.776 00:45:36 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.776 00:45:36 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:52.776 [ 00:05:52.776 "bdev_malloc_delete", 00:05:52.776 "bdev_malloc_create", 00:05:52.776 "bdev_null_resize", 00:05:52.776 "bdev_null_delete", 00:05:52.776 "bdev_null_create", 00:05:52.776 "bdev_nvme_cuse_unregister", 00:05:52.776 "bdev_nvme_cuse_register", 00:05:52.776 "bdev_opal_new_user", 00:05:52.776 "bdev_opal_set_lock_state", 00:05:52.776 "bdev_opal_delete", 00:05:52.776 "bdev_opal_get_info", 00:05:52.776 "bdev_opal_create", 00:05:52.776 "bdev_nvme_opal_revert", 00:05:52.776 "bdev_nvme_opal_init", 00:05:52.776 "bdev_nvme_send_cmd", 00:05:52.776 "bdev_nvme_get_path_iostat", 00:05:52.776 "bdev_nvme_get_mdns_discovery_info", 00:05:52.776 "bdev_nvme_stop_mdns_discovery", 00:05:52.776 "bdev_nvme_start_mdns_discovery", 00:05:52.776 "bdev_nvme_set_multipath_policy", 00:05:52.776 "bdev_nvme_set_preferred_path", 00:05:52.776 "bdev_nvme_get_io_paths", 00:05:52.776 "bdev_nvme_remove_error_injection", 00:05:52.776 "bdev_nvme_add_error_injection", 00:05:52.776 "bdev_nvme_get_discovery_info", 00:05:52.776 "bdev_nvme_stop_discovery", 00:05:52.776 "bdev_nvme_start_discovery", 00:05:52.776 "bdev_nvme_get_controller_health_info", 00:05:52.776 "bdev_nvme_disable_controller", 00:05:52.776 "bdev_nvme_enable_controller", 00:05:52.776 "bdev_nvme_reset_controller", 00:05:52.776 "bdev_nvme_get_transport_statistics", 00:05:52.776 "bdev_nvme_apply_firmware", 00:05:52.776 "bdev_nvme_detach_controller", 00:05:52.776 "bdev_nvme_get_controllers", 00:05:52.776 "bdev_nvme_attach_controller", 00:05:52.776 "bdev_nvme_set_hotplug", 00:05:52.776 "bdev_nvme_set_options", 00:05:52.776 "bdev_passthru_delete", 00:05:52.776 "bdev_passthru_create", 00:05:52.776 "bdev_lvol_grow_lvstore", 00:05:52.776 "bdev_lvol_get_lvols", 00:05:52.776 "bdev_lvol_get_lvstores", 00:05:52.776 "bdev_lvol_delete", 00:05:52.776 "bdev_lvol_set_read_only", 00:05:52.776 "bdev_lvol_resize", 00:05:52.776 "bdev_lvol_decouple_parent", 00:05:52.776 "bdev_lvol_inflate", 00:05:52.776 "bdev_lvol_rename", 00:05:52.776 "bdev_lvol_clone_bdev", 00:05:52.776 "bdev_lvol_clone", 00:05:52.776 "bdev_lvol_snapshot", 00:05:52.776 "bdev_lvol_create", 00:05:52.776 "bdev_lvol_delete_lvstore", 00:05:52.776 "bdev_lvol_rename_lvstore", 00:05:52.776 "bdev_lvol_create_lvstore", 00:05:52.776 "bdev_raid_set_options", 00:05:52.776 
"bdev_raid_remove_base_bdev", 00:05:52.776 "bdev_raid_add_base_bdev", 00:05:52.776 "bdev_raid_delete", 00:05:52.776 "bdev_raid_create", 00:05:52.776 "bdev_raid_get_bdevs", 00:05:52.776 "bdev_error_inject_error", 00:05:52.776 "bdev_error_delete", 00:05:52.776 "bdev_error_create", 00:05:52.776 "bdev_split_delete", 00:05:52.776 "bdev_split_create", 00:05:52.776 "bdev_delay_delete", 00:05:52.776 "bdev_delay_create", 00:05:52.776 "bdev_delay_update_latency", 00:05:52.776 "bdev_zone_block_delete", 00:05:52.776 "bdev_zone_block_create", 00:05:52.776 "blobfs_create", 00:05:52.776 "blobfs_detect", 00:05:52.776 "blobfs_set_cache_size", 00:05:52.776 "bdev_aio_delete", 00:05:52.776 "bdev_aio_rescan", 00:05:52.776 "bdev_aio_create", 00:05:52.776 "bdev_ftl_set_property", 00:05:52.776 "bdev_ftl_get_properties", 00:05:52.776 "bdev_ftl_get_stats", 00:05:52.776 "bdev_ftl_unmap", 00:05:52.776 "bdev_ftl_unload", 00:05:52.776 "bdev_ftl_delete", 00:05:52.776 "bdev_ftl_load", 00:05:52.776 "bdev_ftl_create", 00:05:52.776 "bdev_virtio_attach_controller", 00:05:52.776 "bdev_virtio_scsi_get_devices", 00:05:52.776 "bdev_virtio_detach_controller", 00:05:52.776 "bdev_virtio_blk_set_hotplug", 00:05:52.776 "bdev_iscsi_delete", 00:05:52.776 "bdev_iscsi_create", 00:05:52.776 "bdev_iscsi_set_options", 00:05:52.776 "accel_error_inject_error", 00:05:52.776 "ioat_scan_accel_module", 00:05:52.776 "dsa_scan_accel_module", 00:05:52.776 "iaa_scan_accel_module", 00:05:52.776 "vfu_virtio_create_scsi_endpoint", 00:05:52.776 "vfu_virtio_scsi_remove_target", 00:05:52.776 "vfu_virtio_scsi_add_target", 00:05:52.776 "vfu_virtio_create_blk_endpoint", 00:05:52.776 "vfu_virtio_delete_endpoint", 00:05:52.776 "iscsi_set_options", 00:05:52.776 "iscsi_get_auth_groups", 00:05:52.776 "iscsi_auth_group_remove_secret", 00:05:52.776 "iscsi_auth_group_add_secret", 00:05:52.776 "iscsi_delete_auth_group", 00:05:52.776 "iscsi_create_auth_group", 00:05:52.776 "iscsi_set_discovery_auth", 00:05:52.776 "iscsi_get_options", 00:05:52.776 "iscsi_target_node_request_logout", 00:05:52.776 "iscsi_target_node_set_redirect", 00:05:52.776 "iscsi_target_node_set_auth", 00:05:52.776 "iscsi_target_node_add_lun", 00:05:52.776 "iscsi_get_connections", 00:05:52.776 "iscsi_portal_group_set_auth", 00:05:52.776 "iscsi_start_portal_group", 00:05:52.776 "iscsi_delete_portal_group", 00:05:52.776 "iscsi_create_portal_group", 00:05:52.776 "iscsi_get_portal_groups", 00:05:52.776 "iscsi_delete_target_node", 00:05:52.776 "iscsi_target_node_remove_pg_ig_maps", 00:05:52.776 "iscsi_target_node_add_pg_ig_maps", 00:05:52.776 "iscsi_create_target_node", 00:05:52.776 "iscsi_get_target_nodes", 00:05:52.776 "iscsi_delete_initiator_group", 00:05:52.776 "iscsi_initiator_group_remove_initiators", 00:05:52.776 "iscsi_initiator_group_add_initiators", 00:05:52.776 "iscsi_create_initiator_group", 00:05:52.776 "iscsi_get_initiator_groups", 00:05:52.776 "nvmf_set_crdt", 00:05:52.776 "nvmf_set_config", 00:05:52.776 "nvmf_set_max_subsystems", 00:05:52.776 "nvmf_subsystem_get_listeners", 00:05:52.776 "nvmf_subsystem_get_qpairs", 00:05:52.776 "nvmf_subsystem_get_controllers", 00:05:52.776 "nvmf_get_stats", 00:05:52.776 "nvmf_get_transports", 00:05:52.776 "nvmf_create_transport", 00:05:52.776 "nvmf_get_targets", 00:05:52.776 "nvmf_delete_target", 00:05:52.776 "nvmf_create_target", 00:05:52.776 "nvmf_subsystem_allow_any_host", 00:05:52.776 "nvmf_subsystem_remove_host", 00:05:52.776 "nvmf_subsystem_add_host", 00:05:52.776 "nvmf_subsystem_remove_ns", 00:05:52.776 "nvmf_subsystem_add_ns", 00:05:52.776 
"nvmf_subsystem_listener_set_ana_state", 00:05:52.776 "nvmf_discovery_get_referrals", 00:05:52.776 "nvmf_discovery_remove_referral", 00:05:52.776 "nvmf_discovery_add_referral", 00:05:52.776 "nvmf_subsystem_remove_listener", 00:05:52.776 "nvmf_subsystem_add_listener", 00:05:52.776 "nvmf_delete_subsystem", 00:05:52.776 "nvmf_create_subsystem", 00:05:52.776 "nvmf_get_subsystems", 00:05:52.776 "env_dpdk_get_mem_stats", 00:05:52.776 "nbd_get_disks", 00:05:52.776 "nbd_stop_disk", 00:05:52.776 "nbd_start_disk", 00:05:52.776 "ublk_recover_disk", 00:05:52.776 "ublk_get_disks", 00:05:52.776 "ublk_stop_disk", 00:05:52.776 "ublk_start_disk", 00:05:52.777 "ublk_destroy_target", 00:05:52.777 "ublk_create_target", 00:05:52.777 "virtio_blk_create_transport", 00:05:52.777 "virtio_blk_get_transports", 00:05:52.777 "vhost_controller_set_coalescing", 00:05:52.777 "vhost_get_controllers", 00:05:52.777 "vhost_delete_controller", 00:05:52.777 "vhost_create_blk_controller", 00:05:52.777 "vhost_scsi_controller_remove_target", 00:05:52.777 "vhost_scsi_controller_add_target", 00:05:52.777 "vhost_start_scsi_controller", 00:05:52.777 "vhost_create_scsi_controller", 00:05:52.777 "thread_set_cpumask", 00:05:52.777 "framework_get_scheduler", 00:05:52.777 "framework_set_scheduler", 00:05:52.777 "framework_get_reactors", 00:05:52.777 "thread_get_io_channels", 00:05:52.777 "thread_get_pollers", 00:05:52.777 "thread_get_stats", 00:05:52.777 "framework_monitor_context_switch", 00:05:52.777 "spdk_kill_instance", 00:05:52.777 "log_enable_timestamps", 00:05:52.777 "log_get_flags", 00:05:52.777 "log_clear_flag", 00:05:52.777 "log_set_flag", 00:05:52.777 "log_get_level", 00:05:52.777 "log_set_level", 00:05:52.777 "log_get_print_level", 00:05:52.777 "log_set_print_level", 00:05:52.777 "framework_enable_cpumask_locks", 00:05:52.777 "framework_disable_cpumask_locks", 00:05:52.777 "framework_wait_init", 00:05:52.777 "framework_start_init", 00:05:52.777 "scsi_get_devices", 00:05:52.777 "bdev_get_histogram", 00:05:52.777 "bdev_enable_histogram", 00:05:52.777 "bdev_set_qos_limit", 00:05:52.777 "bdev_set_qd_sampling_period", 00:05:52.777 "bdev_get_bdevs", 00:05:52.777 "bdev_reset_iostat", 00:05:52.777 "bdev_get_iostat", 00:05:52.777 "bdev_examine", 00:05:52.777 "bdev_wait_for_examine", 00:05:52.777 "bdev_set_options", 00:05:52.777 "notify_get_notifications", 00:05:52.777 "notify_get_types", 00:05:52.777 "accel_get_stats", 00:05:52.777 "accel_set_options", 00:05:52.777 "accel_set_driver", 00:05:52.777 "accel_crypto_key_destroy", 00:05:52.777 "accel_crypto_keys_get", 00:05:52.777 "accel_crypto_key_create", 00:05:52.777 "accel_assign_opc", 00:05:52.777 "accel_get_module_info", 00:05:52.777 "accel_get_opc_assignments", 00:05:52.777 "vmd_rescan", 00:05:52.777 "vmd_remove_device", 00:05:52.777 "vmd_enable", 00:05:52.777 "sock_set_default_impl", 00:05:52.777 "sock_impl_set_options", 00:05:52.777 "sock_impl_get_options", 00:05:52.777 "iobuf_get_stats", 00:05:52.777 "iobuf_set_options", 00:05:52.777 "framework_get_pci_devices", 00:05:52.777 "framework_get_config", 00:05:52.777 "framework_get_subsystems", 00:05:52.777 "vfu_tgt_set_base_path", 00:05:52.777 "trace_get_info", 00:05:52.777 "trace_get_tpoint_group_mask", 00:05:52.777 "trace_disable_tpoint_group", 00:05:52.777 "trace_enable_tpoint_group", 00:05:52.777 "trace_clear_tpoint_mask", 00:05:52.777 "trace_set_tpoint_mask", 00:05:52.777 "spdk_get_version", 00:05:52.777 "rpc_get_methods" 00:05:52.777 ] 00:05:52.777 00:45:36 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:52.777 
00:45:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.777 00:45:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.035 00:45:36 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.035 00:45:36 -- spdkcli/tcp.sh@38 -- # killprocess 3279143 00:05:53.035 00:45:36 -- common/autotest_common.sh@926 -- # '[' -z 3279143 ']' 00:05:53.035 00:45:36 -- common/autotest_common.sh@930 -- # kill -0 3279143 00:05:53.035 00:45:36 -- common/autotest_common.sh@931 -- # uname 00:05:53.035 00:45:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.035 00:45:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3279143 00:05:53.035 00:45:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.035 00:45:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.035 00:45:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3279143' 00:05:53.035 killing process with pid 3279143 00:05:53.035 00:45:37 -- common/autotest_common.sh@945 -- # kill 3279143 00:05:53.035 00:45:37 -- common/autotest_common.sh@950 -- # wait 3279143 00:05:53.293 00:05:53.293 real 0m1.720s 00:05:53.293 user 0m3.355s 00:05:53.293 sys 0m0.488s 00:05:53.293 00:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.293 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:53.293 ************************************ 00:05:53.293 END TEST spdkcli_tcp 00:05:53.293 ************************************ 00:05:53.293 00:45:37 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.293 00:45:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.293 00:45:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.293 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:53.293 ************************************ 00:05:53.293 START TEST dpdk_mem_utility 00:05:53.293 ************************************ 00:05:53.293 00:45:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.293 * Looking for test storage... 00:05:53.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.293 00:45:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.293 00:45:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3279383 00:05:53.293 00:45:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.293 00:45:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3279383 00:05:53.293 00:45:37 -- common/autotest_common.sh@819 -- # '[' -z 3279383 ']' 00:05:53.293 00:45:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.293 00:45:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.293 00:45:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
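Before the dpdk_mem_utility output starts in earnest, note how the spdkcli_tcp test above exercised rpc.py over TCP: socat bridges a TCP listener to the target's UNIX-domain RPC socket, and the long rpc_get_methods listing was fetched through that bridge. The socat and rpc.py commands are copied from the log; the cleanup line and the reuse of the SPDK path variable from the earlier sketches are added.

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # Query the target through the TCP side of the bridge (retry/timeout flags as in the log).
  "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill $socat_pid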
00:05:53.293 00:45:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.293 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:53.552 [2024-07-23 00:45:37.542409] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:53.552 [2024-07-23 00:45:37.542488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279383 ] 00:05:53.552 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.552 [2024-07-23 00:45:37.603307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.552 [2024-07-23 00:45:37.690566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.552 [2024-07-23 00:45:37.690777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.486 00:45:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.486 00:45:38 -- common/autotest_common.sh@852 -- # return 0 00:05:54.486 00:45:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.486 00:45:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.486 00:45:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.486 00:45:38 -- common/autotest_common.sh@10 -- # set +x 00:05:54.486 { 00:05:54.486 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.486 } 00:05:54.486 00:45:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.486 00:45:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.486 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:54.486 1 heaps totaling size 814.000000 MiB 00:05:54.486 size: 814.000000 MiB heap id: 0 00:05:54.486 end heaps---------- 00:05:54.486 8 mempools totaling size 598.116089 MiB 00:05:54.486 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.486 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.486 size: 84.521057 MiB name: bdev_io_3279383 00:05:54.486 size: 51.011292 MiB name: evtpool_3279383 00:05:54.486 size: 50.003479 MiB name: msgpool_3279383 00:05:54.486 size: 21.763794 MiB name: PDU_Pool 00:05:54.486 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.486 size: 0.026123 MiB name: Session_Pool 00:05:54.486 end mempools------- 00:05:54.486 6 memzones totaling size 4.142822 MiB 00:05:54.486 size: 1.000366 MiB name: RG_ring_0_3279383 00:05:54.486 size: 1.000366 MiB name: RG_ring_1_3279383 00:05:54.486 size: 1.000366 MiB name: RG_ring_4_3279383 00:05:54.486 size: 1.000366 MiB name: RG_ring_5_3279383 00:05:54.486 size: 0.125366 MiB name: RG_ring_2_3279383 00:05:54.486 size: 0.015991 MiB name: RG_ring_3_3279383 00:05:54.486 end memzones------- 00:05:54.486 00:45:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.486 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:54.486 list of free elements. 
size: 12.519348 MiB 00:05:54.486 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:54.486 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:54.486 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:54.486 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:54.486 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:54.486 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:54.486 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:54.487 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:54.487 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:54.487 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:54.487 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:54.487 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:54.487 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:54.487 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:54.487 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:54.487 list of standard malloc elements. size: 199.218079 MiB 00:05:54.487 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:54.487 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:54.487 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.487 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:54.487 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.487 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.487 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:54.487 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.487 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:54.487 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:54.487 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:54.487 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:54.487 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:54.487 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:54.487 list of memzone associated elements. size: 602.262573 MiB 00:05:54.487 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:54.487 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.487 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:54.487 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.487 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:54.487 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3279383_0 00:05:54.487 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:54.487 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3279383_0 00:05:54.487 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:54.487 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3279383_0 00:05:54.487 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:54.487 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.487 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:54.487 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.487 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:54.487 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3279383 00:05:54.487 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:54.487 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3279383 00:05:54.487 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.487 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3279383 00:05:54.487 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:54.487 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.487 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:54.487 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.487 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:54.487 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.487 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:54.487 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.487 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:54.487 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3279383 00:05:54.487 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:54.487 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3279383 00:05:54.487 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:54.487 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3279383 00:05:54.487 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:54.487 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3279383 00:05:54.487 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:54.487 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3279383 00:05:54.487 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:54.487 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.487 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:54.487 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.487 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:54.487 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.487 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:54.487 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3279383 00:05:54.487 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:54.487 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.487 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:54.487 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.487 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:54.487 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3279383 00:05:54.487 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:54.487 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.487 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:54.487 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3279383 00:05:54.487 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:54.487 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3279383 00:05:54.487 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:54.487 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.487 00:45:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.487 00:45:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3279383 00:05:54.487 00:45:38 -- common/autotest_common.sh@926 -- # '[' -z 3279383 ']' 00:05:54.487 00:45:38 -- common/autotest_common.sh@930 -- # kill -0 3279383 00:05:54.487 00:45:38 -- common/autotest_common.sh@931 -- # uname 00:05:54.487 00:45:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.487 00:45:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3279383 00:05:54.487 00:45:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.487 00:45:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.487 00:45:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3279383' 00:05:54.487 killing process with pid 3279383 00:05:54.487 00:45:38 -- common/autotest_common.sh@945 -- # kill 3279383 00:05:54.487 00:45:38 -- common/autotest_common.sh@950 -- # wait 3279383 00:05:55.054 00:05:55.054 real 0m1.624s 00:05:55.054 user 0m1.824s 00:05:55.054 sys 0m0.439s 00:05:55.054 00:45:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.054 00:45:39 -- common/autotest_common.sh@10 -- # set +x 00:05:55.054 ************************************ 00:05:55.054 END TEST dpdk_mem_utility 00:05:55.054 ************************************ 00:05:55.054 00:45:39 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:55.054 00:45:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.054 00:45:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.054 00:45:39 -- common/autotest_common.sh@10 -- # set +x 
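The dpdk_mem_utility walkthrough that just ended has two moving parts: the env_dpdk_get_mem_stats RPC, which per the log writes its dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump into the heap/mempool/memzone tables shown above. A minimal sketch, reusing the SPDK path variable from the earlier snippets and assuming the script reads the dump file from its default location:

  # Ask the running target to dump its DPDK memory state.
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats   # -> /tmp/spdk_mem_dump.txt
  # Summarize heaps, mempools and memzones, then the per-element view invoked above with -m 0.
  "$SPDK/scripts/dpdk_mem_info.py"
  "$SPDK/scripts/dpdk_mem_info.py" -m 0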
00:05:55.054 ************************************ 00:05:55.054 START TEST event 00:05:55.054 ************************************ 00:05:55.054 00:45:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:55.054 * Looking for test storage... 00:05:55.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.054 00:45:39 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:55.054 00:45:39 -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.054 00:45:39 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.054 00:45:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:55.054 00:45:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.054 00:45:39 -- common/autotest_common.sh@10 -- # set +x 00:05:55.054 ************************************ 00:05:55.054 START TEST event_perf 00:05:55.054 ************************************ 00:05:55.054 00:45:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.054 Running I/O for 1 seconds...[2024-07-23 00:45:39.161497] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:55.054 [2024-07-23 00:45:39.161585] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279673 ] 00:05:55.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.054 [2024-07-23 00:45:39.230249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.312 [2024-07-23 00:45:39.325050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.312 [2024-07-23 00:45:39.325076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.312 [2024-07-23 00:45:39.325133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.312 [2024-07-23 00:45:39.325135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.247 Running I/O for 1 seconds... 00:05:56.247 lcore 0: 232668 00:05:56.247 lcore 1: 232667 00:05:56.247 lcore 2: 232666 00:05:56.247 lcore 3: 232667 00:05:56.247 done. 
00:05:56.247 00:05:56.247 real 0m1.257s 00:05:56.247 user 0m4.157s 00:05:56.247 sys 0m0.094s 00:05:56.247 00:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.247 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:05:56.247 ************************************ 00:05:56.247 END TEST event_perf 00:05:56.247 ************************************ 00:05:56.247 00:45:40 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.247 00:45:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:56.247 00:45:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.247 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:05:56.247 ************************************ 00:05:56.247 START TEST event_reactor 00:05:56.247 ************************************ 00:05:56.247 00:45:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.247 [2024-07-23 00:45:40.445127] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:56.247 [2024-07-23 00:45:40.445225] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279832 ] 00:05:56.505 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.505 [2024-07-23 00:45:40.507170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.505 [2024-07-23 00:45:40.599003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.880 test_start 00:05:57.880 oneshot 00:05:57.880 tick 100 00:05:57.880 tick 100 00:05:57.880 tick 250 00:05:57.880 tick 100 00:05:57.880 tick 100 00:05:57.880 tick 100 00:05:57.880 tick 250 00:05:57.880 tick 500 00:05:57.880 tick 100 00:05:57.880 tick 100 00:05:57.880 tick 250 00:05:57.880 tick 100 00:05:57.880 tick 100 00:05:57.880 test_end 00:05:57.880 00:05:57.880 real 0m1.249s 00:05:57.880 user 0m1.157s 00:05:57.880 sys 0m0.086s 00:05:57.880 00:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.880 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:57.880 ************************************ 00:05:57.880 END TEST event_reactor 00:05:57.880 ************************************ 00:05:57.880 00:45:41 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.880 00:45:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:57.880 00:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.880 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:57.880 ************************************ 00:05:57.880 START TEST event_reactor_perf 00:05:57.880 ************************************ 00:05:57.880 00:45:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.880 [2024-07-23 00:45:41.720738] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:57.880 [2024-07-23 00:45:41.720811] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279995 ] 00:05:57.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.880 [2024-07-23 00:45:41.783071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.880 [2024-07-23 00:45:41.873382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.814 test_start 00:05:58.814 test_end 00:05:58.814 Performance: 356109 events per second 00:05:58.814 00:05:58.814 real 0m1.247s 00:05:58.814 user 0m1.162s 00:05:58.814 sys 0m0.081s 00:05:58.814 00:45:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.814 00:45:42 -- common/autotest_common.sh@10 -- # set +x 00:05:58.814 ************************************ 00:05:58.814 END TEST event_reactor_perf 00:05:58.814 ************************************ 00:05:58.814 00:45:42 -- event/event.sh@49 -- # uname -s 00:05:58.814 00:45:42 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.814 00:45:42 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.814 00:45:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.814 00:45:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.814 00:45:42 -- common/autotest_common.sh@10 -- # set +x 00:05:58.814 ************************************ 00:05:58.814 START TEST event_scheduler 00:05:58.814 ************************************ 00:05:58.814 00:45:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.072 * Looking for test storage... 00:05:59.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:59.072 00:45:43 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.072 00:45:43 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3280174 00:05:59.072 00:45:43 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.072 00:45:43 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.072 00:45:43 -- scheduler/scheduler.sh@37 -- # waitforlisten 3280174 00:05:59.072 00:45:43 -- common/autotest_common.sh@819 -- # '[' -z 3280174 ']' 00:05:59.072 00:45:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.072 00:45:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.072 00:45:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.072 00:45:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.072 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.072 [2024-07-23 00:45:43.067191] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:59.072 [2024-07-23 00:45:43.067266] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280174 ] 00:05:59.072 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.072 [2024-07-23 00:45:43.124290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.072 [2024-07-23 00:45:43.213138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.072 [2024-07-23 00:45:43.213209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.072 [2024-07-23 00:45:43.213267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.072 [2024-07-23 00:45:43.213270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.073 00:45:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.330 00:45:43 -- common/autotest_common.sh@852 -- # return 0 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 POWER: Env isn't set yet! 00:05:59.330 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:59.330 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:59.330 POWER: Cannot get available frequencies of lcore 0 00:05:59.330 POWER: Attempting to initialise PSTAT power management... 00:05:59.330 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:59.330 POWER: Initialized successfully for lcore 0 power management 00:05:59.330 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:59.330 POWER: Initialized successfully for lcore 1 power management 00:05:59.330 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:59.330 POWER: Initialized successfully for lcore 2 power management 00:05:59.330 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:59.330 POWER: Initialized successfully for lcore 3 power management 00:05:59.330 [2024-07-23 00:45:43.309823] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.330 [2024-07-23 00:45:43.309841] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.330 [2024-07-23 00:45:43.309852] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 [2024-07-23 00:45:43.410706] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.330 00:45:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.330 00:45:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 ************************************ 00:05:59.330 START TEST scheduler_create_thread 00:05:59.330 ************************************ 00:05:59.330 00:45:43 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 2 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 3 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 4 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 5 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 6 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 7 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 8 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 9 00:05:59.330 
00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 10 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 00:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.330 00:45:43 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.330 00:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.330 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.896 00:45:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.896 00:45:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.896 00:45:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.896 00:45:44 -- common/autotest_common.sh@10 -- # set +x 00:06:01.269 00:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.269 00:45:45 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.269 00:45:45 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.269 00:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.269 00:45:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.642 00:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.642 00:06:02.642 real 0m3.098s 00:06:02.642 user 0m0.008s 00:06:02.642 sys 0m0.007s 00:06:02.642 00:45:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.642 00:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:02.642 ************************************ 00:06:02.642 END TEST scheduler_create_thread 00:06:02.642 ************************************ 00:06:02.642 00:45:46 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.642 00:45:46 -- scheduler/scheduler.sh@46 -- # killprocess 3280174 00:06:02.642 00:45:46 -- common/autotest_common.sh@926 -- # '[' -z 3280174 ']' 00:06:02.642 00:45:46 -- common/autotest_common.sh@930 -- # kill -0 3280174 00:06:02.642 00:45:46 -- common/autotest_common.sh@931 -- # uname 00:06:02.642 00:45:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.642 00:45:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3280174 00:06:02.642 00:45:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:02.642 00:45:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:02.642 00:45:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3280174' 00:06:02.642 killing process with pid 3280174 00:06:02.642 00:45:46 -- common/autotest_common.sh@945 -- # kill 3280174 00:06:02.642 00:45:46 -- common/autotest_common.sh@950 -- # wait 3280174 00:06:02.900 [2024-07-23 00:45:46.894671] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
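For reference, the scheduler_create_thread trace above reduces to a short sequence of RPC calls against the scheduler test app. A minimal hand-run sketch of that sequence, assuming repo-root relative paths and that rpc.py can locate the scheduler_plugin module on its PYTHONPATH (thread ids and timings will differ from this run):
# start the scheduler test app on 4 cores, main lcore 2, held at --wait-for-rpc (as scheduler.sh does)
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# select the dynamic scheduler, then let framework init finish
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
# create a pinned active thread and an adjustable one; in this test setup the create call prints the new thread id
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
# retarget the second thread to 50% active, then remove it
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"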
00:06:02.900 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:02.900 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:02.900 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:02.900 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:02.900 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:02.900 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:02.900 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:02.900 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:03.159 00:06:03.159 real 0m4.167s 00:06:03.159 user 0m6.840s 00:06:03.159 sys 0m0.291s 00:06:03.159 00:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.159 00:45:47 -- common/autotest_common.sh@10 -- # set +x 00:06:03.159 ************************************ 00:06:03.159 END TEST event_scheduler 00:06:03.159 ************************************ 00:06:03.159 00:45:47 -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.159 00:45:47 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.159 00:45:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.159 00:45:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.159 00:45:47 -- common/autotest_common.sh@10 -- # set +x 00:06:03.159 ************************************ 00:06:03.159 START TEST app_repeat 00:06:03.159 ************************************ 00:06:03.159 00:45:47 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:03.159 00:45:47 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.159 00:45:47 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.159 00:45:47 -- event/event.sh@13 -- # local nbd_list 00:06:03.159 00:45:47 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.159 00:45:47 -- event/event.sh@14 -- # local bdev_list 00:06:03.159 00:45:47 -- event/event.sh@15 -- # local repeat_times=4 00:06:03.159 00:45:47 -- event/event.sh@17 -- # modprobe nbd 00:06:03.159 00:45:47 -- event/event.sh@19 -- # repeat_pid=3280762 00:06:03.159 00:45:47 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.159 00:45:47 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.159 00:45:47 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3280762' 00:06:03.159 Process app_repeat pid: 3280762 00:06:03.159 00:45:47 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.159 00:45:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.159 spdk_app_start Round 0 00:06:03.159 00:45:47 -- event/event.sh@25 -- # waitforlisten 3280762 /var/tmp/spdk-nbd.sock 00:06:03.159 00:45:47 -- common/autotest_common.sh@819 -- # '[' -z 3280762 ']' 00:06:03.159 00:45:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.159 00:45:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.159 00:45:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:03.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.159 00:45:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.159 00:45:47 -- common/autotest_common.sh@10 -- # set +x 00:06:03.159 [2024-07-23 00:45:47.206739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:03.159 [2024-07-23 00:45:47.206819] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280762 ] 00:06:03.159 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.159 [2024-07-23 00:45:47.267115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.159 [2024-07-23 00:45:47.356441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.159 [2024-07-23 00:45:47.356445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.093 00:45:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.093 00:45:48 -- common/autotest_common.sh@852 -- # return 0 00:06:04.093 00:45:48 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.351 Malloc0 00:06:04.351 00:45:48 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.609 Malloc1 00:06:04.609 00:45:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@12 -- # local i 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.609 00:45:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.876 /dev/nbd0 00:06:04.876 00:45:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.876 00:45:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.876 00:45:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:04.876 00:45:48 -- common/autotest_common.sh@857 -- # local i 00:06:04.876 00:45:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:04.876 00:45:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:04.876 00:45:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:04.876 00:45:48 -- 
common/autotest_common.sh@861 -- # break 00:06:04.876 00:45:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:04.876 00:45:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:04.876 00:45:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.876 1+0 records in 00:06:04.876 1+0 records out 00:06:04.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151462 s, 27.0 MB/s 00:06:04.876 00:45:48 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.876 00:45:48 -- common/autotest_common.sh@874 -- # size=4096 00:06:04.876 00:45:48 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.876 00:45:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:04.876 00:45:48 -- common/autotest_common.sh@877 -- # return 0 00:06:04.876 00:45:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.876 00:45:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.876 00:45:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.164 /dev/nbd1 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.164 00:45:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:05.164 00:45:49 -- common/autotest_common.sh@857 -- # local i 00:06:05.164 00:45:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.164 00:45:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.164 00:45:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:05.164 00:45:49 -- common/autotest_common.sh@861 -- # break 00:06:05.164 00:45:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.164 00:45:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.164 00:45:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.164 1+0 records in 00:06:05.164 1+0 records out 00:06:05.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195933 s, 20.9 MB/s 00:06:05.164 00:45:49 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.164 00:45:49 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.164 00:45:49 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.164 00:45:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:05.164 00:45:49 -- common/autotest_common.sh@877 -- # return 0 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.164 00:45:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.422 00:45:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.422 { 00:06:05.422 "nbd_device": "/dev/nbd0", 00:06:05.422 "bdev_name": "Malloc0" 00:06:05.422 }, 00:06:05.422 { 00:06:05.422 "nbd_device": "/dev/nbd1", 
00:06:05.422 "bdev_name": "Malloc1" 00:06:05.422 } 00:06:05.422 ]' 00:06:05.422 00:45:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.422 { 00:06:05.422 "nbd_device": "/dev/nbd0", 00:06:05.422 "bdev_name": "Malloc0" 00:06:05.422 }, 00:06:05.422 { 00:06:05.422 "nbd_device": "/dev/nbd1", 00:06:05.422 "bdev_name": "Malloc1" 00:06:05.422 } 00:06:05.422 ]' 00:06:05.422 00:45:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.422 00:45:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.422 /dev/nbd1' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.423 /dev/nbd1' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.423 256+0 records in 00:06:05.423 256+0 records out 00:06:05.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385975 s, 272 MB/s 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.423 256+0 records in 00:06:05.423 256+0 records out 00:06:05.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237064 s, 44.2 MB/s 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.423 256+0 records in 00:06:05.423 256+0 records out 00:06:05.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252355 s, 41.6 MB/s 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@51 -- # local i 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.423 00:45:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@41 -- # break 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.681 00:45:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@41 -- # break 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.939 00:45:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@65 -- # true 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.197 00:45:50 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.197 00:45:50 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.454 00:45:50 -- event/event.sh@35 -- # 
sleep 3 00:06:06.713 [2024-07-23 00:45:50.852007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.971 [2024-07-23 00:45:50.940393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.971 [2024-07-23 00:45:50.940393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.971 [2024-07-23 00:45:51.000062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.971 [2024-07-23 00:45:51.000133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.499 00:45:53 -- event/event.sh@23 -- # for i in {0..2} 00:06:09.499 00:45:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.499 spdk_app_start Round 1 00:06:09.499 00:45:53 -- event/event.sh@25 -- # waitforlisten 3280762 /var/tmp/spdk-nbd.sock 00:06:09.499 00:45:53 -- common/autotest_common.sh@819 -- # '[' -z 3280762 ']' 00:06:09.499 00:45:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.499 00:45:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.499 00:45:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.499 00:45:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.499 00:45:53 -- common/autotest_common.sh@10 -- # set +x 00:06:09.756 00:45:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.756 00:45:53 -- common/autotest_common.sh@852 -- # return 0 00:06:09.756 00:45:53 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.015 Malloc0 00:06:10.015 00:45:54 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.273 Malloc1 00:06:10.273 00:45:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.273 00:45:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.531 /dev/nbd0 00:06:10.531 00:45:54 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.531 00:45:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.531 00:45:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:10.531 00:45:54 -- common/autotest_common.sh@857 -- # local i 00:06:10.531 00:45:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:10.531 00:45:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:10.531 00:45:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:10.531 00:45:54 -- common/autotest_common.sh@861 -- # break 00:06:10.531 00:45:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:10.531 00:45:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:10.531 00:45:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.531 1+0 records in 00:06:10.531 1+0 records out 00:06:10.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193781 s, 21.1 MB/s 00:06:10.531 00:45:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.531 00:45:54 -- common/autotest_common.sh@874 -- # size=4096 00:06:10.531 00:45:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.532 00:45:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:10.532 00:45:54 -- common/autotest_common.sh@877 -- # return 0 00:06:10.532 00:45:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.532 00:45:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.532 00:45:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.789 /dev/nbd1 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.789 00:45:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:10.789 00:45:54 -- common/autotest_common.sh@857 -- # local i 00:06:10.789 00:45:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:10.789 00:45:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:10.789 00:45:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:10.789 00:45:54 -- common/autotest_common.sh@861 -- # break 00:06:10.789 00:45:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:10.789 00:45:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:10.789 00:45:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.789 1+0 records in 00:06:10.789 1+0 records out 00:06:10.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202871 s, 20.2 MB/s 00:06:10.789 00:45:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.789 00:45:54 -- common/autotest_common.sh@874 -- # size=4096 00:06:10.789 00:45:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.789 00:45:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:10.789 00:45:54 -- common/autotest_common.sh@877 -- # return 0 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.789 00:45:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.047 00:45:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.048 { 00:06:11.048 "nbd_device": "/dev/nbd0", 00:06:11.048 "bdev_name": "Malloc0" 00:06:11.048 }, 00:06:11.048 { 00:06:11.048 "nbd_device": "/dev/nbd1", 00:06:11.048 "bdev_name": "Malloc1" 00:06:11.048 } 00:06:11.048 ]' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.048 { 00:06:11.048 "nbd_device": "/dev/nbd0", 00:06:11.048 "bdev_name": "Malloc0" 00:06:11.048 }, 00:06:11.048 { 00:06:11.048 "nbd_device": "/dev/nbd1", 00:06:11.048 "bdev_name": "Malloc1" 00:06:11.048 } 00:06:11.048 ]' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.048 /dev/nbd1' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.048 /dev/nbd1' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.048 256+0 records in 00:06:11.048 256+0 records out 00:06:11.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464004 s, 226 MB/s 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.048 256+0 records in 00:06:11.048 256+0 records out 00:06:11.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239532 s, 43.8 MB/s 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.048 00:45:55 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.343 256+0 records in 00:06:11.343 256+0 records out 00:06:11.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254721 s, 41.2 MB/s 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@41 -- # break 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.343 00:45:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.344 00:45:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@41 -- # break 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.601 00:45:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.858 00:45:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.858 00:45:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.858 00:45:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@65 -- # true 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.116 00:45:56 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.116 00:45:56 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.374 00:45:56 -- event/event.sh@35 -- # sleep 3 00:06:12.374 [2024-07-23 00:45:56.537810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.632 [2024-07-23 00:45:56.627584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.632 [2024-07-23 00:45:56.627587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.632 [2024-07-23 00:45:56.685246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.632 [2024-07-23 00:45:56.685314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.157 00:45:59 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.157 00:45:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.157 spdk_app_start Round 2 00:06:15.157 00:45:59 -- event/event.sh@25 -- # waitforlisten 3280762 /var/tmp/spdk-nbd.sock 00:06:15.157 00:45:59 -- common/autotest_common.sh@819 -- # '[' -z 3280762 ']' 00:06:15.157 00:45:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.157 00:45:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.157 00:45:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:15.157 00:45:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.157 00:45:59 -- common/autotest_common.sh@10 -- # set +x 00:06:15.414 00:45:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.415 00:45:59 -- common/autotest_common.sh@852 -- # return 0 00:06:15.415 00:45:59 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.672 Malloc0 00:06:15.672 00:45:59 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.930 Malloc1 00:06:15.930 00:46:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@12 -- # local i 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.930 00:46:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.931 00:46:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.187 /dev/nbd0 00:06:16.187 00:46:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.187 00:46:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.187 00:46:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:16.187 00:46:00 -- common/autotest_common.sh@857 -- # local i 00:06:16.187 00:46:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.187 00:46:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.187 00:46:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:16.187 00:46:00 -- common/autotest_common.sh@861 -- # break 00:06:16.187 00:46:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.187 00:46:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.187 00:46:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.187 1+0 records in 00:06:16.187 1+0 records out 00:06:16.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242357 s, 16.9 MB/s 00:06:16.187 00:46:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.187 00:46:00 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.187 00:46:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.187 00:46:00 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:16.187 00:46:00 -- common/autotest_common.sh@877 -- # return 0 00:06:16.187 00:46:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.187 00:46:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.187 00:46:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.443 /dev/nbd1 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.443 00:46:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:16.443 00:46:00 -- common/autotest_common.sh@857 -- # local i 00:06:16.443 00:46:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.443 00:46:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.443 00:46:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:16.443 00:46:00 -- common/autotest_common.sh@861 -- # break 00:06:16.443 00:46:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.443 00:46:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.443 00:46:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.443 1+0 records in 00:06:16.443 1+0 records out 00:06:16.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203155 s, 20.2 MB/s 00:06:16.443 00:46:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.443 00:46:00 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.443 00:46:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.443 00:46:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.443 00:46:00 -- common/autotest_common.sh@877 -- # return 0 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.443 00:46:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.700 { 00:06:16.700 "nbd_device": "/dev/nbd0", 00:06:16.700 "bdev_name": "Malloc0" 00:06:16.700 }, 00:06:16.700 { 00:06:16.700 "nbd_device": "/dev/nbd1", 00:06:16.700 "bdev_name": "Malloc1" 00:06:16.700 } 00:06:16.700 ]' 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.700 { 00:06:16.700 "nbd_device": "/dev/nbd0", 00:06:16.700 "bdev_name": "Malloc0" 00:06:16.700 }, 00:06:16.700 { 00:06:16.700 "nbd_device": "/dev/nbd1", 00:06:16.700 "bdev_name": "Malloc1" 00:06:16.700 } 00:06:16.700 ]' 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.700 /dev/nbd1' 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.700 /dev/nbd1' 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.700 00:46:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.701 00:46:00 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.701 256+0 records in 00:06:16.701 256+0 records out 00:06:16.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499213 s, 210 MB/s 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.701 00:46:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.959 256+0 records in 00:06:16.959 256+0 records out 00:06:16.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235705 s, 44.5 MB/s 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.959 256+0 records in 00:06:16.959 256+0 records out 00:06:16.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250463 s, 41.9 MB/s 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@51 -- # local i 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.959 00:46:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.217 00:46:01 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@41 -- # break 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.217 00:46:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@41 -- # break 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.474 00:46:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@65 -- # true 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.731 00:46:01 -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.731 00:46:01 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.990 00:46:02 -- event/event.sh@35 -- # sleep 3 00:06:18.248 [2024-07-23 00:46:02.235718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.248 [2024-07-23 00:46:02.322343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.248 [2024-07-23 00:46:02.322343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.248 [2024-07-23 00:46:02.379209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.248 [2024-07-23 00:46:02.379284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:21.530 00:46:05 -- event/event.sh@38 -- # waitforlisten 3280762 /var/tmp/spdk-nbd.sock 00:06:21.530 00:46:05 -- common/autotest_common.sh@819 -- # '[' -z 3280762 ']' 00:06:21.530 00:46:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.530 00:46:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.530 00:46:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.530 00:46:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.530 00:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:21.530 00:46:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.530 00:46:05 -- common/autotest_common.sh@852 -- # return 0 00:06:21.530 00:46:05 -- event/event.sh@39 -- # killprocess 3280762 00:06:21.530 00:46:05 -- common/autotest_common.sh@926 -- # '[' -z 3280762 ']' 00:06:21.530 00:46:05 -- common/autotest_common.sh@930 -- # kill -0 3280762 00:06:21.530 00:46:05 -- common/autotest_common.sh@931 -- # uname 00:06:21.530 00:46:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:21.530 00:46:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3280762 00:06:21.530 00:46:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:21.530 00:46:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:21.530 00:46:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3280762' 00:06:21.530 killing process with pid 3280762 00:06:21.530 00:46:05 -- common/autotest_common.sh@945 -- # kill 3280762 00:06:21.530 00:46:05 -- common/autotest_common.sh@950 -- # wait 3280762 00:06:21.530 spdk_app_start is called in Round 0. 00:06:21.530 Shutdown signal received, stop current app iteration 00:06:21.530 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:06:21.530 spdk_app_start is called in Round 1. 00:06:21.530 Shutdown signal received, stop current app iteration 00:06:21.530 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:06:21.530 spdk_app_start is called in Round 2. 00:06:21.530 Shutdown signal received, stop current app iteration 00:06:21.530 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:06:21.530 spdk_app_start is called in Round 3. 
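The killprocess helper seen in this stretch of the trace guards the kill with a few sanity checks before terminating the target. Roughly, and simplified from what autotest_common.sh actually does:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # bail out if the pid is already gone
        [ "$(uname)" = Linux ] || return 1         # the comm check below is Linux-specific
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK target
        [ "$name" = sudo ] && return 1             # never kill a sudo wrapper by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it when it is a child of this shell
    }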
00:06:21.530 Shutdown signal received, stop current app iteration 00:06:21.530 00:46:05 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.530 00:46:05 -- event/event.sh@42 -- # return 0 00:06:21.530 00:06:21.530 real 0m18.304s 00:06:21.530 user 0m39.765s 00:06:21.530 sys 0m3.137s 00:06:21.530 00:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.530 00:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:21.530 ************************************ 00:06:21.530 END TEST app_repeat 00:06:21.530 ************************************ 00:06:21.530 00:46:05 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.530 00:46:05 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:21.530 00:46:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.530 00:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.530 00:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:21.530 ************************************ 00:06:21.530 START TEST cpu_locks 00:06:21.530 ************************************ 00:06:21.530 00:46:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:21.530 * Looking for test storage... 00:06:21.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:21.530 00:46:05 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.530 00:46:05 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.530 00:46:05 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.530 00:46:05 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.530 00:46:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.530 00:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.530 00:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:21.530 ************************************ 00:06:21.530 START TEST default_locks 00:06:21.530 ************************************ 00:06:21.530 00:46:05 -- common/autotest_common.sh@1104 -- # default_locks 00:06:21.530 00:46:05 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3283177 00:06:21.531 00:46:05 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.531 00:46:05 -- event/cpu_locks.sh@47 -- # waitforlisten 3283177 00:06:21.531 00:46:05 -- common/autotest_common.sh@819 -- # '[' -z 3283177 ']' 00:06:21.531 00:46:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.531 00:46:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.531 00:46:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.531 00:46:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.531 00:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:21.531 [2024-07-23 00:46:05.621040] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
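waitforlisten, which shows up before every test here, just starts the target and polls its UNIX-domain RPC socket until it answers. A hedged sketch of that idea; the retry count and poll interval below are made up, and rpc_get_methods is simply used as a cheap RPC to probe with:

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    rpc_sock=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break                                  # the target is up and serving RPCs
        fi
        sleep 0.1
    done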
00:06:21.531 [2024-07-23 00:46:05.621123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283177 ] 00:06:21.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.531 [2024-07-23 00:46:05.680671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.789 [2024-07-23 00:46:05.769326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.789 [2024-07-23 00:46:05.769486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.356 00:46:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.356 00:46:06 -- common/autotest_common.sh@852 -- # return 0 00:06:22.356 00:46:06 -- event/cpu_locks.sh@49 -- # locks_exist 3283177 00:06:22.356 00:46:06 -- event/cpu_locks.sh@22 -- # lslocks -p 3283177 00:06:22.356 00:46:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.923 lslocks: write error 00:06:22.923 00:46:06 -- event/cpu_locks.sh@50 -- # killprocess 3283177 00:06:22.923 00:46:06 -- common/autotest_common.sh@926 -- # '[' -z 3283177 ']' 00:06:22.923 00:46:06 -- common/autotest_common.sh@930 -- # kill -0 3283177 00:06:22.923 00:46:06 -- common/autotest_common.sh@931 -- # uname 00:06:22.923 00:46:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.923 00:46:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283177 00:06:22.923 00:46:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.923 00:46:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.923 00:46:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283177' 00:06:22.923 killing process with pid 3283177 00:06:22.923 00:46:06 -- common/autotest_common.sh@945 -- # kill 3283177 00:06:22.923 00:46:06 -- common/autotest_common.sh@950 -- # wait 3283177 00:06:23.182 00:46:07 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3283177 00:06:23.183 00:46:07 -- common/autotest_common.sh@640 -- # local es=0 00:06:23.183 00:46:07 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3283177 00:06:23.183 00:46:07 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:23.183 00:46:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:23.183 00:46:07 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:23.183 00:46:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:23.183 00:46:07 -- common/autotest_common.sh@643 -- # waitforlisten 3283177 00:06:23.183 00:46:07 -- common/autotest_common.sh@819 -- # '[' -z 3283177 ']' 00:06:23.183 00:46:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.183 00:46:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.183 00:46:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
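The locks_exist probe traced above reduces to one pipeline: list the file locks held by the target and look for the per-core lock files. The "lslocks: write error" line is most likely harmless, since grep -q exits on the first match and lslocks then complains about the broken pipe. A sketch:

    locks_exist() {
        local pid=$1
        # succeeds if the process holds any lock on a /var/tmp/spdk_cpu_lock_* file
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }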
00:06:23.183 00:46:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.183 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3283177) - No such process 00:06:23.183 ERROR: process (pid: 3283177) is no longer running 00:06:23.183 00:46:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.183 00:46:07 -- common/autotest_common.sh@852 -- # return 1 00:06:23.183 00:46:07 -- common/autotest_common.sh@643 -- # es=1 00:06:23.183 00:46:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:23.183 00:46:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:23.183 00:46:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:23.183 00:46:07 -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.183 00:46:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.183 00:46:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.183 00:46:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.183 00:06:23.183 real 0m1.752s 00:06:23.183 user 0m1.850s 00:06:23.183 sys 0m0.582s 00:06:23.183 00:46:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.183 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.183 ************************************ 00:06:23.183 END TEST default_locks 00:06:23.183 ************************************ 00:06:23.183 00:46:07 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:23.183 00:46:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.183 00:46:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.183 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.183 ************************************ 00:06:23.183 START TEST default_locks_via_rpc 00:06:23.183 ************************************ 00:06:23.183 00:46:07 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:23.183 00:46:07 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3283475 00:06:23.183 00:46:07 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.183 00:46:07 -- event/cpu_locks.sh@63 -- # waitforlisten 3283475 00:06:23.183 00:46:07 -- common/autotest_common.sh@819 -- # '[' -z 3283475 ']' 00:06:23.183 00:46:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.183 00:46:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.183 00:46:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.183 00:46:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.183 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.442 [2024-07-23 00:46:07.401840] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
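The NOT wrapper driving the es bookkeeping above is a negative test: it runs a command that is expected to fail (here, waiting on a target that was just killed) and only returns success if the command really did fail. A condensed sketch, not the full autotest_common.sh implementation:

    NOT() {
        local es=0
        "$@" || es=$?        # capture the wrapped command's exit status
        (( es != 0 ))        # pass only when that command failed
    }

    # e.g. a killed target must not come back on the RPC socket
    NOT waitforlisten 3283177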
00:06:23.442 [2024-07-23 00:46:07.401935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283475 ] 00:06:23.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.442 [2024-07-23 00:46:07.463308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.442 [2024-07-23 00:46:07.548949] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.442 [2024-07-23 00:46:07.549109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.378 00:46:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.378 00:46:08 -- common/autotest_common.sh@852 -- # return 0 00:06:24.379 00:46:08 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:24.379 00:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.379 00:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:24.379 00:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.379 00:46:08 -- event/cpu_locks.sh@67 -- # no_locks 00:06:24.379 00:46:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.379 00:46:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.379 00:46:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.379 00:46:08 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.379 00:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.379 00:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:24.379 00:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.379 00:46:08 -- event/cpu_locks.sh@71 -- # locks_exist 3283475 00:06:24.379 00:46:08 -- event/cpu_locks.sh@22 -- # lslocks -p 3283475 00:06:24.379 00:46:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.686 00:46:08 -- event/cpu_locks.sh@73 -- # killprocess 3283475 00:06:24.686 00:46:08 -- common/autotest_common.sh@926 -- # '[' -z 3283475 ']' 00:06:24.686 00:46:08 -- common/autotest_common.sh@930 -- # kill -0 3283475 00:06:24.686 00:46:08 -- common/autotest_common.sh@931 -- # uname 00:06:24.686 00:46:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.686 00:46:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283475 00:06:24.686 00:46:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.686 00:46:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.686 00:46:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283475' 00:06:24.686 killing process with pid 3283475 00:06:24.686 00:46:08 -- common/autotest_common.sh@945 -- # kill 3283475 00:06:24.686 00:46:08 -- common/autotest_common.sh@950 -- # wait 3283475 00:06:24.945 00:06:24.945 real 0m1.674s 00:06:24.945 user 0m1.789s 00:06:24.945 sys 0m0.553s 00:06:24.945 00:46:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.945 00:46:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.945 ************************************ 00:06:24.945 END TEST default_locks_via_rpc 00:06:24.945 ************************************ 00:06:24.945 00:46:09 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.945 00:46:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.945 00:46:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.945 00:46:09 -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.945 ************************************ 00:06:24.945 START TEST non_locking_app_on_locked_coremask 00:06:24.945 ************************************ 00:06:24.945 00:46:09 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:24.945 00:46:09 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3283672 00:06:24.945 00:46:09 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.945 00:46:09 -- event/cpu_locks.sh@81 -- # waitforlisten 3283672 /var/tmp/spdk.sock 00:06:24.945 00:46:09 -- common/autotest_common.sh@819 -- # '[' -z 3283672 ']' 00:06:24.945 00:46:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.945 00:46:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.945 00:46:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.945 00:46:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.945 00:46:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.945 [2024-07-23 00:46:09.099926] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:24.945 [2024-07-23 00:46:09.100016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283672 ] 00:06:24.945 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.204 [2024-07-23 00:46:09.158742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.204 [2024-07-23 00:46:09.241685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.204 [2024-07-23 00:46:09.241854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.138 00:46:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.138 00:46:10 -- common/autotest_common.sh@852 -- # return 0 00:06:26.138 00:46:10 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3283789 00:06:26.138 00:46:10 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:26.138 00:46:10 -- event/cpu_locks.sh@85 -- # waitforlisten 3283789 /var/tmp/spdk2.sock 00:06:26.138 00:46:10 -- common/autotest_common.sh@819 -- # '[' -z 3283789 ']' 00:06:26.139 00:46:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.139 00:46:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.139 00:46:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.139 00:46:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.139 00:46:10 -- common/autotest_common.sh@10 -- # set +x 00:06:26.139 [2024-07-23 00:46:10.084943] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
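The non_locking_app_on_locked_coremask case boils down to the two launches visible above: the first target claims the core 0 lock, the second reuses core 0 but opts out of lock checking and gets its own RPC socket, so both are expected to come up. In outline:

    # first instance: takes /var/tmp/spdk_cpu_lock_000 for core 0
    ./build/bin/spdk_tgt -m 0x1 &

    # second instance: same core, but skips the lock check and uses its own socket
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &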
00:06:26.139 [2024-07-23 00:46:10.085049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283789 ] 00:06:26.139 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.139 [2024-07-23 00:46:10.177716] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.139 [2024-07-23 00:46:10.177746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.396 [2024-07-23 00:46:10.363461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.396 [2024-07-23 00:46:10.363645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.962 00:46:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.962 00:46:11 -- common/autotest_common.sh@852 -- # return 0 00:06:26.962 00:46:11 -- event/cpu_locks.sh@87 -- # locks_exist 3283672 00:06:26.962 00:46:11 -- event/cpu_locks.sh@22 -- # lslocks -p 3283672 00:06:26.962 00:46:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.220 lslocks: write error 00:06:27.220 00:46:11 -- event/cpu_locks.sh@89 -- # killprocess 3283672 00:06:27.220 00:46:11 -- common/autotest_common.sh@926 -- # '[' -z 3283672 ']' 00:06:27.220 00:46:11 -- common/autotest_common.sh@930 -- # kill -0 3283672 00:06:27.220 00:46:11 -- common/autotest_common.sh@931 -- # uname 00:06:27.220 00:46:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.220 00:46:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283672 00:06:27.479 00:46:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.479 00:46:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.479 00:46:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283672' 00:06:27.479 killing process with pid 3283672 00:06:27.479 00:46:11 -- common/autotest_common.sh@945 -- # kill 3283672 00:06:27.479 00:46:11 -- common/autotest_common.sh@950 -- # wait 3283672 00:06:28.414 00:46:12 -- event/cpu_locks.sh@90 -- # killprocess 3283789 00:06:28.414 00:46:12 -- common/autotest_common.sh@926 -- # '[' -z 3283789 ']' 00:06:28.414 00:46:12 -- common/autotest_common.sh@930 -- # kill -0 3283789 00:06:28.414 00:46:12 -- common/autotest_common.sh@931 -- # uname 00:06:28.414 00:46:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.414 00:46:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283789 00:06:28.414 00:46:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.414 00:46:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.414 00:46:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283789' 00:06:28.414 killing process with pid 3283789 00:06:28.414 00:46:12 -- common/autotest_common.sh@945 -- # kill 3283789 00:06:28.414 00:46:12 -- common/autotest_common.sh@950 -- # wait 3283789 00:06:28.673 00:06:28.673 real 0m3.615s 00:06:28.673 user 0m3.950s 00:06:28.673 sys 0m1.058s 00:06:28.673 00:46:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.673 00:46:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.673 ************************************ 00:06:28.673 END TEST non_locking_app_on_locked_coremask 00:06:28.673 ************************************ 00:06:28.673 00:46:12 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:28.673 00:46:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.673 00:46:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.673 00:46:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.673 ************************************ 00:06:28.673 START TEST locking_app_on_unlocked_coremask 00:06:28.673 ************************************ 00:06:28.673 00:46:12 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:28.673 00:46:12 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3284224 00:06:28.673 00:46:12 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.673 00:46:12 -- event/cpu_locks.sh@99 -- # waitforlisten 3284224 /var/tmp/spdk.sock 00:06:28.673 00:46:12 -- common/autotest_common.sh@819 -- # '[' -z 3284224 ']' 00:06:28.673 00:46:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.673 00:46:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.673 00:46:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.673 00:46:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.673 00:46:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.673 [2024-07-23 00:46:12.745163] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:28.673 [2024-07-23 00:46:12.745245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284224 ] 00:06:28.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.673 [2024-07-23 00:46:12.806494] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.673 [2024-07-23 00:46:12.806540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.931 [2024-07-23 00:46:12.899802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.931 [2024-07-23 00:46:12.899986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.867 00:46:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.867 00:46:13 -- common/autotest_common.sh@852 -- # return 0 00:06:29.867 00:46:13 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3284365 00:06:29.867 00:46:13 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.867 00:46:13 -- event/cpu_locks.sh@103 -- # waitforlisten 3284365 /var/tmp/spdk2.sock 00:06:29.867 00:46:13 -- common/autotest_common.sh@819 -- # '[' -z 3284365 ']' 00:06:29.867 00:46:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.867 00:46:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.867 00:46:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
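The spdk_cpu_lock_* files that these tests keep checking are ordinary advisory file locks under /var/tmp, one per claimed core. The snippet below only illustrates the concept with flock(1); SPDK takes and releases these locks internally at application start, not through this command:

    # purely illustrative: emulate "core 0 is claimed" with an advisory lock
    exec 9>/var/tmp/spdk_cpu_lock_000
    if flock -n 9; then
        echo "core 0 lock acquired"
    else
        echo "core 0 is already claimed by another process"
    fi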
00:06:29.867 00:46:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.867 00:46:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.867 [2024-07-23 00:46:13.794116] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:29.867 [2024-07-23 00:46:13.794198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284365 ] 00:06:29.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.867 [2024-07-23 00:46:13.883141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.867 [2024-07-23 00:46:14.065088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.867 [2024-07-23 00:46:14.065266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.802 00:46:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.802 00:46:14 -- common/autotest_common.sh@852 -- # return 0 00:06:30.802 00:46:14 -- event/cpu_locks.sh@105 -- # locks_exist 3284365 00:06:30.802 00:46:14 -- event/cpu_locks.sh@22 -- # lslocks -p 3284365 00:06:30.802 00:46:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.060 lslocks: write error 00:06:31.060 00:46:15 -- event/cpu_locks.sh@107 -- # killprocess 3284224 00:06:31.060 00:46:15 -- common/autotest_common.sh@926 -- # '[' -z 3284224 ']' 00:06:31.060 00:46:15 -- common/autotest_common.sh@930 -- # kill -0 3284224 00:06:31.060 00:46:15 -- common/autotest_common.sh@931 -- # uname 00:06:31.060 00:46:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.060 00:46:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284224 00:06:31.060 00:46:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.060 00:46:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.060 00:46:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284224' 00:06:31.060 killing process with pid 3284224 00:06:31.060 00:46:15 -- common/autotest_common.sh@945 -- # kill 3284224 00:06:31.060 00:46:15 -- common/autotest_common.sh@950 -- # wait 3284224 00:06:31.994 00:46:15 -- event/cpu_locks.sh@108 -- # killprocess 3284365 00:06:31.994 00:46:15 -- common/autotest_common.sh@926 -- # '[' -z 3284365 ']' 00:06:31.994 00:46:15 -- common/autotest_common.sh@930 -- # kill -0 3284365 00:06:31.995 00:46:15 -- common/autotest_common.sh@931 -- # uname 00:06:31.995 00:46:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.995 00:46:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284365 00:06:31.995 00:46:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.995 00:46:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.995 00:46:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284365' 00:06:31.995 killing process with pid 3284365 00:06:31.995 00:46:15 -- common/autotest_common.sh@945 -- # kill 3284365 00:06:31.995 00:46:15 -- common/autotest_common.sh@950 -- # wait 3284365 00:06:32.252 00:06:32.252 real 0m3.715s 00:06:32.252 user 0m4.069s 00:06:32.252 sys 0m1.067s 00:06:32.252 00:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.252 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:06:32.252 ************************************ 00:06:32.252 END TEST locking_app_on_unlocked_coremask 
00:06:32.252 ************************************ 00:06:32.252 00:46:16 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:32.252 00:46:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.252 00:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.252 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:06:32.252 ************************************ 00:06:32.252 START TEST locking_app_on_locked_coremask 00:06:32.252 ************************************ 00:06:32.252 00:46:16 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:32.252 00:46:16 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3284676 00:06:32.252 00:46:16 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.252 00:46:16 -- event/cpu_locks.sh@116 -- # waitforlisten 3284676 /var/tmp/spdk.sock 00:06:32.252 00:46:16 -- common/autotest_common.sh@819 -- # '[' -z 3284676 ']' 00:06:32.252 00:46:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.252 00:46:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.252 00:46:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.252 00:46:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.252 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:06:32.512 [2024-07-23 00:46:16.487056] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:32.512 [2024-07-23 00:46:16.487147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284676 ] 00:06:32.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.512 [2024-07-23 00:46:16.544094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.512 [2024-07-23 00:46:16.630573] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.512 [2024-07-23 00:46:16.630799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.447 00:46:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.447 00:46:17 -- common/autotest_common.sh@852 -- # return 0 00:06:33.447 00:46:17 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3284818 00:06:33.447 00:46:17 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3284818 /var/tmp/spdk2.sock 00:06:33.447 00:46:17 -- common/autotest_common.sh@640 -- # local es=0 00:06:33.447 00:46:17 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3284818 /var/tmp/spdk2.sock 00:06:33.447 00:46:17 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:33.447 00:46:17 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.447 00:46:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.447 00:46:17 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:33.447 00:46:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.447 00:46:17 -- common/autotest_common.sh@643 -- # waitforlisten 3284818 /var/tmp/spdk2.sock 00:06:33.447 00:46:17 -- common/autotest_common.sh@819 -- 
# '[' -z 3284818 ']' 00:06:33.447 00:46:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.447 00:46:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.447 00:46:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.447 00:46:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.447 00:46:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.447 [2024-07-23 00:46:17.468290] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:33.447 [2024-07-23 00:46:17.468366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284818 ] 00:06:33.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.447 [2024-07-23 00:46:17.551375] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3284676 has claimed it. 00:06:33.447 [2024-07-23 00:46:17.551447] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3284818) - No such process 00:06:34.014 ERROR: process (pid: 3284818) is no longer running 00:06:34.014 00:46:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.014 00:46:18 -- common/autotest_common.sh@852 -- # return 1 00:06:34.014 00:46:18 -- common/autotest_common.sh@643 -- # es=1 00:06:34.014 00:46:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.014 00:46:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.014 00:46:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.014 00:46:18 -- event/cpu_locks.sh@122 -- # locks_exist 3284676 00:06:34.014 00:46:18 -- event/cpu_locks.sh@22 -- # lslocks -p 3284676 00:06:34.014 00:46:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.272 lslocks: write error 00:06:34.272 00:46:18 -- event/cpu_locks.sh@124 -- # killprocess 3284676 00:06:34.272 00:46:18 -- common/autotest_common.sh@926 -- # '[' -z 3284676 ']' 00:06:34.272 00:46:18 -- common/autotest_common.sh@930 -- # kill -0 3284676 00:06:34.272 00:46:18 -- common/autotest_common.sh@931 -- # uname 00:06:34.272 00:46:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.272 00:46:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284676 00:06:34.272 00:46:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.272 00:46:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.272 00:46:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284676' 00:06:34.272 killing process with pid 3284676 00:06:34.272 00:46:18 -- common/autotest_common.sh@945 -- # kill 3284676 00:06:34.272 00:46:18 -- common/autotest_common.sh@950 -- # wait 3284676 00:06:34.838 00:06:34.838 real 0m2.420s 00:06:34.838 user 0m2.747s 00:06:34.838 sys 0m0.650s 00:06:34.838 00:46:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.838 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 ************************************ 00:06:34.838 END TEST locking_app_on_locked_coremask 00:06:34.838 ************************************ 00:06:34.838 
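locking_app_on_locked_coremask is the conflict case: with lock checking left on, a second target whose mask includes an already-claimed core logs "Cannot create lock on core 0, probably process ... has claimed it" and exits before it ever serves RPCs. A condensed sketch of that expectation (the sleep is an arbitrary grace period, not taken from the test):

    ./build/bin/spdk_tgt -m 0x1 &            # claims core 0
    first_pid=$!
    sleep 1                                  # give it time to take the lock

    # the second instance should refuse to start and exit non-zero
    if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started on a claimed core"
        exit 1
    fi
    kill "$first_pid"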
00:46:18 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:34.838 00:46:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.838 00:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.838 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 ************************************ 00:06:34.838 START TEST locking_overlapped_coremask 00:06:34.838 ************************************ 00:06:34.838 00:46:18 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:34.838 00:46:18 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3284982 00:06:34.838 00:46:18 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:34.838 00:46:18 -- event/cpu_locks.sh@133 -- # waitforlisten 3284982 /var/tmp/spdk.sock 00:06:34.838 00:46:18 -- common/autotest_common.sh@819 -- # '[' -z 3284982 ']' 00:06:34.838 00:46:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.838 00:46:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.838 00:46:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.838 00:46:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.838 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 [2024-07-23 00:46:18.933895] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:34.838 [2024-07-23 00:46:18.933995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284982 ] 00:06:34.838 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.838 [2024-07-23 00:46:18.991759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.097 [2024-07-23 00:46:19.080290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.097 [2024-07-23 00:46:19.080500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.097 [2024-07-23 00:46:19.080567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.097 [2024-07-23 00:46:19.080570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.662 00:46:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.662 00:46:19 -- common/autotest_common.sh@852 -- # return 0 00:06:35.662 00:46:19 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3285126 00:06:35.662 00:46:19 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3285126 /var/tmp/spdk2.sock 00:06:35.662 00:46:19 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.662 00:46:19 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3285126 /var/tmp/spdk2.sock 00:06:35.662 00:46:19 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:35.662 00:46:19 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:35.662 00:46:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.662 00:46:19 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:35.662 00:46:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.662 00:46:19 
-- common/autotest_common.sh@643 -- # waitforlisten 3285126 /var/tmp/spdk2.sock 00:06:35.662 00:46:19 -- common/autotest_common.sh@819 -- # '[' -z 3285126 ']' 00:06:35.662 00:46:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.663 00:46:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.663 00:46:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.663 00:46:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.663 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:06:35.921 [2024-07-23 00:46:19.909999] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:35.921 [2024-07-23 00:46:19.910072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285126 ] 00:06:35.921 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.921 [2024-07-23 00:46:19.995538] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3284982 has claimed it. 00:06:35.921 [2024-07-23 00:46:19.995598] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3285126) - No such process 00:06:36.487 ERROR: process (pid: 3285126) is no longer running 00:06:36.487 00:46:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.487 00:46:20 -- common/autotest_common.sh@852 -- # return 1 00:06:36.487 00:46:20 -- common/autotest_common.sh@643 -- # es=1 00:06:36.487 00:46:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.487 00:46:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.487 00:46:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.487 00:46:20 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:36.487 00:46:20 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.487 00:46:20 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.487 00:46:20 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.487 00:46:20 -- event/cpu_locks.sh@141 -- # killprocess 3284982 00:06:36.487 00:46:20 -- common/autotest_common.sh@926 -- # '[' -z 3284982 ']' 00:06:36.487 00:46:20 -- common/autotest_common.sh@930 -- # kill -0 3284982 00:06:36.487 00:46:20 -- common/autotest_common.sh@931 -- # uname 00:06:36.487 00:46:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.487 00:46:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284982 00:06:36.487 00:46:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.487 00:46:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.487 00:46:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284982' 00:06:36.487 killing process with pid 3284982 00:06:36.487 00:46:20 -- common/autotest_common.sh@945 -- # kill 3284982 00:06:36.487 00:46:20 
-- common/autotest_common.sh@950 -- # wait 3284982 00:06:37.053 00:06:37.053 real 0m2.153s 00:06:37.053 user 0m6.179s 00:06:37.053 sys 0m0.467s 00:06:37.053 00:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.053 00:46:21 -- common/autotest_common.sh@10 -- # set +x 00:06:37.053 ************************************ 00:06:37.053 END TEST locking_overlapped_coremask 00:06:37.053 ************************************ 00:06:37.053 00:46:21 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.053 00:46:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.053 00:46:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.053 00:46:21 -- common/autotest_common.sh@10 -- # set +x 00:06:37.053 ************************************ 00:06:37.053 START TEST locking_overlapped_coremask_via_rpc 00:06:37.053 ************************************ 00:06:37.053 00:46:21 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:37.053 00:46:21 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3285296 00:06:37.053 00:46:21 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.053 00:46:21 -- event/cpu_locks.sh@149 -- # waitforlisten 3285296 /var/tmp/spdk.sock 00:06:37.053 00:46:21 -- common/autotest_common.sh@819 -- # '[' -z 3285296 ']' 00:06:37.053 00:46:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.053 00:46:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.053 00:46:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.053 00:46:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.053 00:46:21 -- common/autotest_common.sh@10 -- # set +x 00:06:37.053 [2024-07-23 00:46:21.113127] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:37.053 [2024-07-23 00:46:21.113222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285296 ] 00:06:37.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.053 [2024-07-23 00:46:21.175140] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
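Both overlapped-coremask tests pair a target on -m 0x7 with one on -m 0x1c. 0x7 is cores 0-2 and 0x1c is cores 2-4, so the two masks collide exactly on core 2, which is why the claim errors in this test and in the via_rpc variant below both single out that core:

    printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2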
00:06:37.054 [2024-07-23 00:46:21.175203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.311 [2024-07-23 00:46:21.262520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.311 [2024-07-23 00:46:21.262730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.311 [2024-07-23 00:46:21.262783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.311 [2024-07-23 00:46:21.262787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.875 00:46:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.875 00:46:22 -- common/autotest_common.sh@852 -- # return 0 00:06:37.875 00:46:22 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3285439 00:06:37.875 00:46:22 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.875 00:46:22 -- event/cpu_locks.sh@153 -- # waitforlisten 3285439 /var/tmp/spdk2.sock 00:06:37.875 00:46:22 -- common/autotest_common.sh@819 -- # '[' -z 3285439 ']' 00:06:37.875 00:46:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.875 00:46:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.875 00:46:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.875 00:46:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.875 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:37.875 [2024-07-23 00:46:22.070633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:37.875 [2024-07-23 00:46:22.070720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285439 ] 00:06:38.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.133 [2024-07-23 00:46:22.158296] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.133 [2024-07-23 00:46:22.158334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.133 [2024-07-23 00:46:22.328402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.133 [2024-07-23 00:46:22.328648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.133 [2024-07-23 00:46:22.332670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.133 [2024-07-23 00:46:22.332672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.067 00:46:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.067 00:46:22 -- common/autotest_common.sh@852 -- # return 0 00:06:39.067 00:46:22 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.067 00:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.067 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:39.067 00:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.067 00:46:22 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.067 00:46:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:39.067 00:46:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.067 00:46:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:39.067 00:46:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.067 00:46:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:39.067 00:46:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.067 00:46:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.067 00:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.067 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:39.067 [2024-07-23 00:46:22.986704] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3285296 has claimed it. 00:06:39.067 request: 00:06:39.067 { 00:06:39.067 "method": "framework_enable_cpumask_locks", 00:06:39.067 "req_id": 1 00:06:39.067 } 00:06:39.067 Got JSON-RPC error response 00:06:39.067 response: 00:06:39.067 { 00:06:39.067 "code": -32603, 00:06:39.067 "message": "Failed to claim CPU core: 2" 00:06:39.067 } 00:06:39.067 00:46:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:39.067 00:46:22 -- common/autotest_common.sh@643 -- # es=1 00:06:39.067 00:46:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.067 00:46:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.067 00:46:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.067 00:46:22 -- event/cpu_locks.sh@158 -- # waitforlisten 3285296 /var/tmp/spdk.sock 00:06:39.067 00:46:22 -- common/autotest_common.sh@819 -- # '[' -z 3285296 ']' 00:06:39.067 00:46:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.067 00:46:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.067 00:46:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
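With both targets started under --disable-cpumask-locks, the via_rpc variant turns locking back on over the RPC socket. The first call succeeds and claims cores 0-2; the second, issued to the target running on 0x1c, comes back with the JSON-RPC internal error (-32603) shown above because core 2 is already locked. The exchange, stripped down:

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks    # ok, locks cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails: "Failed to claim CPU core: 2"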
00:06:39.067 00:46:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.067 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:39.067 00:46:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.067 00:46:23 -- common/autotest_common.sh@852 -- # return 0 00:06:39.067 00:46:23 -- event/cpu_locks.sh@159 -- # waitforlisten 3285439 /var/tmp/spdk2.sock 00:06:39.067 00:46:23 -- common/autotest_common.sh@819 -- # '[' -z 3285439 ']' 00:06:39.067 00:46:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.067 00:46:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.067 00:46:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.067 00:46:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.067 00:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:39.324 00:46:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.324 00:46:23 -- common/autotest_common.sh@852 -- # return 0 00:06:39.324 00:46:23 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:39.324 00:46:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.324 00:46:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.324 00:46:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.324 00:06:39.324 real 0m2.397s 00:06:39.324 user 0m1.135s 00:06:39.324 sys 0m0.199s 00:06:39.324 00:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.324 00:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:39.324 ************************************ 00:06:39.324 END TEST locking_overlapped_coremask_via_rpc 00:06:39.324 ************************************ 00:06:39.324 00:46:23 -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.324 00:46:23 -- event/cpu_locks.sh@15 -- # [[ -z 3285296 ]] 00:06:39.324 00:46:23 -- event/cpu_locks.sh@15 -- # killprocess 3285296 00:06:39.324 00:46:23 -- common/autotest_common.sh@926 -- # '[' -z 3285296 ']' 00:06:39.324 00:46:23 -- common/autotest_common.sh@930 -- # kill -0 3285296 00:06:39.324 00:46:23 -- common/autotest_common.sh@931 -- # uname 00:06:39.324 00:46:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.324 00:46:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3285296 00:06:39.324 00:46:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.324 00:46:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.324 00:46:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3285296' 00:06:39.324 killing process with pid 3285296 00:06:39.324 00:46:23 -- common/autotest_common.sh@945 -- # kill 3285296 00:06:39.324 00:46:23 -- common/autotest_common.sh@950 -- # wait 3285296 00:06:39.889 00:46:23 -- event/cpu_locks.sh@16 -- # [[ -z 3285439 ]] 00:06:39.889 00:46:23 -- event/cpu_locks.sh@16 -- # killprocess 3285439 00:06:39.889 00:46:23 -- common/autotest_common.sh@926 -- # '[' -z 3285439 ']' 00:06:39.889 00:46:23 -- common/autotest_common.sh@930 -- # kill -0 3285439 00:06:39.889 00:46:23 -- common/autotest_common.sh@931 -- # uname 
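check_remaining_locks, traced a little further up, is just a glob compared against a brace expansion: after the second claim attempt fails, the only lock files left in /var/tmp must be the ones for cores 0-2 of the first target's 0x7 mask. In essence:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]    # fail the test on any stray lock file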
00:06:39.889 00:46:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.889 00:46:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3285439 00:06:39.889 00:46:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:39.889 00:46:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:39.889 00:46:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3285439' 00:06:39.889 killing process with pid 3285439 00:06:39.889 00:46:23 -- common/autotest_common.sh@945 -- # kill 3285439 00:06:39.889 00:46:23 -- common/autotest_common.sh@950 -- # wait 3285439 00:06:40.147 00:46:24 -- event/cpu_locks.sh@18 -- # rm -f 00:06:40.147 00:46:24 -- event/cpu_locks.sh@1 -- # cleanup 00:06:40.147 00:46:24 -- event/cpu_locks.sh@15 -- # [[ -z 3285296 ]] 00:06:40.147 00:46:24 -- event/cpu_locks.sh@15 -- # killprocess 3285296 00:06:40.147 00:46:24 -- common/autotest_common.sh@926 -- # '[' -z 3285296 ']' 00:06:40.147 00:46:24 -- common/autotest_common.sh@930 -- # kill -0 3285296 00:06:40.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3285296) - No such process 00:06:40.147 00:46:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3285296 is not found' 00:06:40.147 Process with pid 3285296 is not found 00:06:40.147 00:46:24 -- event/cpu_locks.sh@16 -- # [[ -z 3285439 ]] 00:06:40.147 00:46:24 -- event/cpu_locks.sh@16 -- # killprocess 3285439 00:06:40.147 00:46:24 -- common/autotest_common.sh@926 -- # '[' -z 3285439 ']' 00:06:40.147 00:46:24 -- common/autotest_common.sh@930 -- # kill -0 3285439 00:06:40.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3285439) - No such process 00:06:40.147 00:46:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3285439 is not found' 00:06:40.147 Process with pid 3285439 is not found 00:06:40.147 00:46:24 -- event/cpu_locks.sh@18 -- # rm -f 00:06:40.404 00:06:40.404 real 0m18.831s 00:06:40.404 user 0m33.564s 00:06:40.404 sys 0m5.403s 00:06:40.404 00:46:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.404 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:40.404 ************************************ 00:06:40.404 END TEST cpu_locks 00:06:40.404 ************************************ 00:06:40.404 00:06:40.404 real 0m45.281s 00:06:40.404 user 1m26.744s 00:06:40.404 sys 0m9.249s 00:06:40.404 00:46:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.405 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:40.405 ************************************ 00:06:40.405 END TEST event 00:06:40.405 ************************************ 00:06:40.405 00:46:24 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:40.405 00:46:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.405 00:46:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.405 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:40.405 ************************************ 00:06:40.405 START TEST thread 00:06:40.405 ************************************ 00:06:40.405 00:46:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:40.405 * Looking for test storage... 
00:06:40.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:40.405 00:46:24 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.405 00:46:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:40.405 00:46:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.405 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:40.405 ************************************ 00:06:40.405 START TEST thread_poller_perf 00:06:40.405 ************************************ 00:06:40.405 00:46:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.405 [2024-07-23 00:46:24.467133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:40.405 [2024-07-23 00:46:24.467211] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285805 ] 00:06:40.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.405 [2024-07-23 00:46:24.531142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.679 [2024-07-23 00:46:24.618990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.679 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.656 ====================================== 00:06:41.656 busy:2710516086 (cyc) 00:06:41.656 total_run_count: 280000 00:06:41.656 tsc_hz: 2700000000 (cyc) 00:06:41.656 ====================================== 00:06:41.656 poller_cost: 9680 (cyc), 3585 (nsec) 00:06:41.656 00:06:41.656 real 0m1.255s 00:06:41.656 user 0m1.167s 00:06:41.656 sys 0m0.082s 00:06:41.656 00:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.656 00:46:25 -- common/autotest_common.sh@10 -- # set +x 00:06:41.656 ************************************ 00:06:41.656 END TEST thread_poller_perf 00:06:41.656 ************************************ 00:06:41.656 00:46:25 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.656 00:46:25 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:41.656 00:46:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.656 00:46:25 -- common/autotest_common.sh@10 -- # set +x 00:06:41.656 ************************************ 00:06:41.656 START TEST thread_poller_perf 00:06:41.656 ************************************ 00:06:41.656 00:46:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.656 [2024-07-23 00:46:25.748177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:41.656 [2024-07-23 00:46:25.748259] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285962 ] 00:06:41.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.656 [2024-07-23 00:46:25.808468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.913 [2024-07-23 00:46:25.899220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.913 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:42.846 ====================================== 00:06:42.846 busy:2703612098 (cyc) 00:06:42.846 total_run_count: 3828000 00:06:42.846 tsc_hz: 2700000000 (cyc) 00:06:42.846 ====================================== 00:06:42.846 poller_cost: 706 (cyc), 261 (nsec) 00:06:42.846 00:06:42.846 real 0m1.248s 00:06:42.846 user 0m1.162s 00:06:42.846 sys 0m0.079s 00:06:42.846 00:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.846 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:06:42.846 ************************************ 00:06:42.846 END TEST thread_poller_perf 00:06:42.846 ************************************ 00:06:42.846 00:46:27 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.846 00:06:42.846 real 0m2.605s 00:06:42.846 user 0m2.372s 00:06:42.846 sys 0m0.234s 00:06:42.846 00:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.846 00:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:42.846 ************************************ 00:06:42.846 END TEST thread 00:06:42.846 ************************************ 00:06:42.846 00:46:27 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.846 00:46:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.846 00:46:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.846 00:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:42.846 ************************************ 00:06:42.846 START TEST accel 00:06:42.846 ************************************ 00:06:42.846 00:46:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:43.104 * Looking for test storage... 00:06:43.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:43.104 00:46:27 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:43.104 00:46:27 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:43.104 00:46:27 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.104 00:46:27 -- accel/accel.sh@59 -- # spdk_tgt_pid=3286160 00:06:43.104 00:46:27 -- accel/accel.sh@60 -- # waitforlisten 3286160 00:06:43.104 00:46:27 -- common/autotest_common.sh@819 -- # '[' -z 3286160 ']' 00:06:43.104 00:46:27 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:43.104 00:46:27 -- accel/accel.sh@58 -- # build_accel_config 00:06:43.104 00:46:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.104 00:46:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.104 00:46:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.104 00:46:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
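The poller_cost figures reported by the two poller_perf runs above follow directly from the counters in each summary; a back-of-the-envelope check of the second run (busy cycles divided by iterations, then converted through tsc_hz), written as standalone shell rather than anything the harness itself executes, would be:

    busy=2703612098; runs=3828000; tsc_hz=2700000000                    # values from the 0-microsecond-period run above
    echo "poller_cost_cyc=$(( busy / runs ))"                           # ~706 cycles per poll
    echo "poller_cost_nsec=$(( (busy / runs) * 1000000000 / tsc_hz ))"  # ~261 nsec at 2.7 GHz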
00:06:43.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.104 00:46:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.104 00:46:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.104 00:46:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.104 00:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.104 00:46:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.104 00:46:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.104 00:46:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.104 00:46:27 -- accel/accel.sh@42 -- # jq -r . 00:06:43.104 [2024-07-23 00:46:27.127226] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:43.104 [2024-07-23 00:46:27.127312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286160 ] 00:06:43.104 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.104 [2024-07-23 00:46:27.195036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.104 [2024-07-23 00:46:27.287674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.104 [2024-07-23 00:46:27.287839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.037 00:46:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.037 00:46:28 -- common/autotest_common.sh@852 -- # return 0 00:06:44.037 00:46:28 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:44.037 00:46:28 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:44.037 00:46:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.037 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:44.037 00:46:28 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:44.037 00:46:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.037 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.037 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.037 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.037 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # IFS== 00:06:44.038 00:46:28 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.038 00:46:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.038 00:46:28 -- accel/accel.sh@67 -- # killprocess 3286160 00:06:44.038 00:46:28 -- common/autotest_common.sh@926 -- # '[' -z 3286160 ']' 00:06:44.038 00:46:28 -- common/autotest_common.sh@930 -- # kill -0 3286160 00:06:44.038 00:46:28 -- common/autotest_common.sh@931 -- # uname 00:06:44.038 00:46:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.038 00:46:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3286160 00:06:44.038 00:46:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.038 00:46:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.038 00:46:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3286160' 00:06:44.038 killing process with pid 3286160 00:06:44.038 00:46:28 -- common/autotest_common.sh@945 -- # kill 3286160 00:06:44.038 00:46:28 -- common/autotest_common.sh@950 -- # wait 3286160 00:06:44.604 00:46:28 -- accel/accel.sh@68 -- # trap - ERR 00:06:44.604 00:46:28 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:44.604 00:46:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:44.604 00:46:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.604 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:44.604 00:46:28 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:44.604 00:46:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:44.604 00:46:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.604 00:46:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.604 00:46:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.604 00:46:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.604 00:46:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.604 00:46:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.604 00:46:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.604 00:46:28 -- accel/accel.sh@42 -- # jq -r . 
00:06:44.604 00:46:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.604 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:44.604 00:46:28 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:44.604 00:46:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.604 00:46:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.604 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:44.604 ************************************ 00:06:44.604 START TEST accel_missing_filename 00:06:44.604 ************************************ 00:06:44.604 00:46:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:44.604 00:46:28 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.604 00:46:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:44.604 00:46:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.604 00:46:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.604 00:46:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.604 00:46:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.604 00:46:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:44.604 00:46:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:44.604 00:46:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.604 00:46:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.604 00:46:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.604 00:46:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.604 00:46:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.604 00:46:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.604 00:46:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.604 00:46:28 -- accel/accel.sh@42 -- # jq -r . 00:06:44.604 [2024-07-23 00:46:28.574474] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:44.604 [2024-07-23 00:46:28.574568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286459 ] 00:06:44.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.604 [2024-07-23 00:46:28.637277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.604 [2024-07-23 00:46:28.727826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.604 [2024-07-23 00:46:28.789385] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.862 [2024-07-23 00:46:28.874052] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:44.862 A filename is required. 
00:06:44.862 00:46:28 -- common/autotest_common.sh@643 -- # es=234 00:06:44.862 00:46:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.862 00:46:28 -- common/autotest_common.sh@652 -- # es=106 00:06:44.862 00:46:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:44.862 00:46:28 -- common/autotest_common.sh@660 -- # es=1 00:06:44.862 00:46:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.862 00:06:44.862 real 0m0.398s 00:06:44.862 user 0m0.282s 00:06:44.862 sys 0m0.149s 00:06:44.862 00:46:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.862 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:44.862 ************************************ 00:06:44.862 END TEST accel_missing_filename 00:06:44.862 ************************************ 00:06:44.862 00:46:28 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.862 00:46:28 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:44.862 00:46:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.862 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:44.862 ************************************ 00:06:44.862 START TEST accel_compress_verify 00:06:44.862 ************************************ 00:06:44.862 00:46:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.862 00:46:28 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.862 00:46:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.862 00:46:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.862 00:46:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.862 00:46:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.862 00:46:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.862 00:46:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.862 00:46:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.862 00:46:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.862 00:46:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.862 00:46:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.862 00:46:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.862 00:46:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.862 00:46:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.862 00:46:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.862 00:46:28 -- accel/accel.sh@42 -- # jq -r . 00:06:44.862 [2024-07-23 00:46:29.003554] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:44.862 [2024-07-23 00:46:29.003668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286480 ] 00:06:44.862 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.862 [2024-07-23 00:46:29.062410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.120 [2024-07-23 00:46:29.154307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.120 [2024-07-23 00:46:29.215832] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.120 [2024-07-23 00:46:29.301817] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:45.379 00:06:45.379 Compression does not support the verify option, aborting. 00:06:45.379 00:46:29 -- common/autotest_common.sh@643 -- # es=161 00:06:45.379 00:46:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.379 00:46:29 -- common/autotest_common.sh@652 -- # es=33 00:06:45.379 00:46:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:45.379 00:46:29 -- common/autotest_common.sh@660 -- # es=1 00:06:45.379 00:46:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.379 00:06:45.379 real 0m0.399s 00:06:45.379 user 0m0.288s 00:06:45.379 sys 0m0.145s 00:06:45.379 00:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.379 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.379 ************************************ 00:06:45.379 END TEST accel_compress_verify 00:06:45.379 ************************************ 00:06:45.379 00:46:29 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:45.379 00:46:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:45.379 00:46:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.379 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.379 ************************************ 00:06:45.379 START TEST accel_wrong_workload 00:06:45.379 ************************************ 00:06:45.379 00:46:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:45.379 00:46:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.379 00:46:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:45.379 00:46:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.379 00:46:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.379 00:46:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.379 00:46:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.379 00:46:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:45.379 00:46:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:45.379 00:46:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.379 00:46:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.379 00:46:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.379 00:46:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.379 00:46:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.379 00:46:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.379 00:46:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.379 00:46:29 -- accel/accel.sh@42 -- # jq -r . 
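The two negative compress cases above fail for different reasons: accel_missing_filename runs the compress workload without -l and is rejected with "A filename is required.", while accel_compress_verify supplies the bib input file but adds -y, which compression does not support. Reduced to the bare commands already visible in the trace (binary and test-file paths as in this workspace):

    # rejected: compress needs an uncompressed input file via -l
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress
    # rejected: verify (-y) is not supported for the compress workload, even with -l given
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y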
00:06:45.379 Unsupported workload type: foobar 00:06:45.379 [2024-07-23 00:46:29.423379] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:45.379 accel_perf options: 00:06:45.379 [-h help message] 00:06:45.379 [-q queue depth per core] 00:06:45.379 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.379 [-T number of threads per core 00:06:45.379 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.379 [-t time in seconds] 00:06:45.379 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.379 [ dif_verify, , dif_generate, dif_generate_copy 00:06:45.379 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.379 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.379 [-S for crc32c workload, use this seed value (default 0) 00:06:45.379 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.379 [-f for fill workload, use this BYTE value (default 255) 00:06:45.379 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.379 [-y verify result if this switch is on] 00:06:45.379 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.379 Can be used to spread operations across a wider range of memory. 00:06:45.379 00:46:29 -- common/autotest_common.sh@643 -- # es=1 00:06:45.379 00:46:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.379 00:46:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:45.379 00:46:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.379 00:06:45.379 real 0m0.022s 00:06:45.379 user 0m0.013s 00:06:45.379 sys 0m0.009s 00:06:45.379 00:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.379 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.379 ************************************ 00:06:45.379 END TEST accel_wrong_workload 00:06:45.379 ************************************ 00:06:45.379 Error: writing output failed: Broken pipe 00:06:45.379 00:46:29 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.379 00:46:29 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:45.379 00:46:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.379 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.379 ************************************ 00:06:45.379 START TEST accel_negative_buffers 00:06:45.379 ************************************ 00:06:45.379 00:46:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.379 00:46:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.379 00:46:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:45.379 00:46:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.379 00:46:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.379 00:46:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.379 00:46:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.379 00:46:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:45.379 00:46:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:45.379 00:46:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.379 00:46:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.379 00:46:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.379 00:46:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.379 00:46:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.379 00:46:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.379 00:46:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.379 00:46:29 -- accel/accel.sh@42 -- # jq -r . 00:06:45.379 -x option must be non-negative. 00:06:45.379 [2024-07-23 00:46:29.469434] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:45.379 accel_perf options: 00:06:45.379 [-h help message] 00:06:45.379 [-q queue depth per core] 00:06:45.379 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.379 [-T number of threads per core 00:06:45.379 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.379 [-t time in seconds] 00:06:45.379 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.379 [ dif_verify, , dif_generate, dif_generate_copy 00:06:45.379 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.379 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.379 [-S for crc32c workload, use this seed value (default 0) 00:06:45.380 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.380 [-f for fill workload, use this BYTE value (default 255) 00:06:45.380 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.380 [-y verify result if this switch is on] 00:06:45.380 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.380 Can be used to spread operations across a wider range of memory. 
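Both negative cases above (unknown workload type foobar, negative -x source-buffer count) are rejected during option parsing, each printing the usage text. For contrast, a well-formed invocation of the same binary, exactly as the crc32c test that follows issues it, is:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y      # 1-second software CRC-32C run, seed 32, verification enabled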
00:06:45.380 00:46:29 -- common/autotest_common.sh@643 -- # es=1 00:06:45.380 00:46:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.380 00:46:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:45.380 00:46:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.380 00:06:45.380 real 0m0.022s 00:06:45.380 user 0m0.012s 00:06:45.380 sys 0m0.010s 00:06:45.380 00:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.380 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.380 ************************************ 00:06:45.380 END TEST accel_negative_buffers 00:06:45.380 ************************************ 00:06:45.380 Error: writing output failed: Broken pipe 00:06:45.380 00:46:29 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:45.380 00:46:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:45.380 00:46:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.380 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.380 ************************************ 00:06:45.380 START TEST accel_crc32c 00:06:45.380 ************************************ 00:06:45.380 00:46:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:45.380 00:46:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.380 00:46:29 -- accel/accel.sh@17 -- # local accel_module 00:06:45.380 00:46:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.380 00:46:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:45.380 00:46:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.380 00:46:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.380 00:46:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.380 00:46:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.380 00:46:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.380 00:46:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.380 00:46:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.380 00:46:29 -- accel/accel.sh@42 -- # jq -r . 00:06:45.380 [2024-07-23 00:46:29.509248] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:45.380 [2024-07-23 00:46:29.509301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286662 ] 00:06:45.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.380 [2024-07-23 00:46:29.570620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.639 [2024-07-23 00:46:29.664534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.012 00:46:30 -- accel/accel.sh@18 -- # out=' 00:06:47.012 SPDK Configuration: 00:06:47.012 Core mask: 0x1 00:06:47.012 00:06:47.012 Accel Perf Configuration: 00:06:47.012 Workload Type: crc32c 00:06:47.012 CRC-32C seed: 32 00:06:47.012 Transfer size: 4096 bytes 00:06:47.012 Vector count 1 00:06:47.012 Module: software 00:06:47.012 Queue depth: 32 00:06:47.012 Allocate depth: 32 00:06:47.012 # threads/core: 1 00:06:47.012 Run time: 1 seconds 00:06:47.012 Verify: Yes 00:06:47.012 00:06:47.012 Running for 1 seconds... 
00:06:47.012 00:06:47.012 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.012 ------------------------------------------------------------------------------------ 00:06:47.012 0,0 400096/s 1562 MiB/s 0 0 00:06:47.012 ==================================================================================== 00:06:47.012 Total 400096/s 1562 MiB/s 0 0' 00:06:47.012 00:46:30 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.012 00:46:30 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.012 00:46:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.012 00:46:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.012 00:46:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.012 00:46:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.012 00:46:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.012 00:46:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.012 00:46:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.012 00:46:30 -- accel/accel.sh@42 -- # jq -r . 00:06:47.012 [2024-07-23 00:46:30.907540] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:47.012 [2024-07-23 00:46:30.907681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286806 ] 00:06:47.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.012 [2024-07-23 00:46:30.971296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.012 [2024-07-23 00:46:31.061713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=0x1 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=crc32c 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=32 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 
00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=software 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=32 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=32 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=1 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.012 00:46:31 -- accel/accel.sh@21 -- # val=Yes 00:06:47.012 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.012 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.013 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.013 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.013 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.013 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.013 00:46:31 -- accel/accel.sh@21 -- # val= 00:06:47.013 00:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.013 00:46:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.013 00:46:31 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@21 -- # val= 00:06:48.385 00:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # IFS=: 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@21 -- # val= 00:06:48.385 00:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # IFS=: 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@21 -- # val= 00:06:48.385 00:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # IFS=: 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@21 -- # val= 00:06:48.385 00:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # IFS=: 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@21 -- # val= 00:06:48.385 00:46:32 -- accel/accel.sh@22 -- # case "$var" in 
00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # IFS=: 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@21 -- # val= 00:06:48.385 00:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # IFS=: 00:06:48.385 00:46:32 -- accel/accel.sh@20 -- # read -r var val 00:06:48.385 00:46:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.385 00:46:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:48.385 00:46:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.385 00:06:48.385 real 0m2.801s 00:06:48.385 user 0m2.494s 00:06:48.385 sys 0m0.299s 00:06:48.385 00:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.385 00:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:48.385 ************************************ 00:06:48.386 END TEST accel_crc32c 00:06:48.386 ************************************ 00:06:48.386 00:46:32 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:48.386 00:46:32 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:48.386 00:46:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.386 00:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:48.386 ************************************ 00:06:48.386 START TEST accel_crc32c_C2 00:06:48.386 ************************************ 00:06:48.386 00:46:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:48.386 00:46:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.386 00:46:32 -- accel/accel.sh@17 -- # local accel_module 00:06:48.386 00:46:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:48.386 00:46:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:48.386 00:46:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.386 00:46:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.386 00:46:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.386 00:46:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.386 00:46:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.386 00:46:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.386 00:46:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.386 00:46:32 -- accel/accel.sh@42 -- # jq -r . 00:06:48.386 [2024-07-23 00:46:32.344037] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:48.386 [2024-07-23 00:46:32.344116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286967 ] 00:06:48.386 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.386 [2024-07-23 00:46:32.404644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.386 [2024-07-23 00:46:32.495316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.768 00:46:33 -- accel/accel.sh@18 -- # out=' 00:06:49.768 SPDK Configuration: 00:06:49.768 Core mask: 0x1 00:06:49.768 00:06:49.768 Accel Perf Configuration: 00:06:49.768 Workload Type: crc32c 00:06:49.768 CRC-32C seed: 0 00:06:49.768 Transfer size: 4096 bytes 00:06:49.768 Vector count 2 00:06:49.768 Module: software 00:06:49.768 Queue depth: 32 00:06:49.768 Allocate depth: 32 00:06:49.768 # threads/core: 1 00:06:49.768 Run time: 1 seconds 00:06:49.768 Verify: Yes 00:06:49.768 00:06:49.768 Running for 1 seconds... 00:06:49.768 00:06:49.768 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.768 ------------------------------------------------------------------------------------ 00:06:49.768 0,0 312256/s 2439 MiB/s 0 0 00:06:49.768 ==================================================================================== 00:06:49.768 Total 312256/s 1219 MiB/s 0 0' 00:06:49.768 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.768 00:46:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.768 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.768 00:46:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.768 00:46:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.768 00:46:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.768 00:46:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.768 00:46:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.768 00:46:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.768 00:46:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.768 00:46:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.768 00:46:33 -- accel/accel.sh@42 -- # jq -r . 00:06:49.768 [2024-07-23 00:46:33.746975] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
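In the crc32c vector-count-2 summary above, the per-core row reports roughly twice the bandwidth of the Total row; numerically, 312256 transfers/s x 4096 B is about 1219 MiB/s, and doubling it gives about 2439 MiB/s, consistent with the -C 2 two-buffer vector being counted in one column and not the other. A quick check of that arithmetic:

    xfers=312256; size=4096; vecs=2                      # values from the -C 2 run above
    echo "$(( xfers * size / 1048576 )) MiB/s"           # ~1219, matching the Total row
    echo "$(( xfers * size * vecs / 1048576 )) MiB/s"    # ~2439, matching the per-core row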
00:06:49.769 [2024-07-23 00:46:33.747058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287105 ] 00:06:49.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.769 [2024-07-23 00:46:33.810193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.769 [2024-07-23 00:46:33.900535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=0x1 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=0 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=software 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=32 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=32 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- 
accel/accel.sh@21 -- # val=1 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val=Yes 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 00:46:33 -- accel/accel.sh@21 -- # val= 00:06:49.769 00:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 00:46:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@21 -- # val= 00:06:51.142 00:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # IFS=: 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@21 -- # val= 00:06:51.142 00:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # IFS=: 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@21 -- # val= 00:06:51.142 00:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # IFS=: 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@21 -- # val= 00:06:51.142 00:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # IFS=: 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@21 -- # val= 00:06:51.142 00:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # IFS=: 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@21 -- # val= 00:06:51.142 00:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # IFS=: 00:06:51.142 00:46:35 -- accel/accel.sh@20 -- # read -r var val 00:06:51.142 00:46:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.142 00:46:35 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:51.142 00:46:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.142 00:06:51.142 real 0m2.805s 00:06:51.142 user 0m2.513s 00:06:51.142 sys 0m0.285s 00:06:51.142 00:46:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.142 00:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:51.142 ************************************ 00:06:51.142 END TEST accel_crc32c_C2 00:06:51.142 ************************************ 00:06:51.142 00:46:35 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.142 00:46:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.142 00:46:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.142 00:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:51.142 ************************************ 00:06:51.142 START TEST accel_copy 
00:06:51.142 ************************************ 00:06:51.142 00:46:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:51.142 00:46:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.142 00:46:35 -- accel/accel.sh@17 -- # local accel_module 00:06:51.142 00:46:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:51.142 00:46:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.142 00:46:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.142 00:46:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.142 00:46:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.142 00:46:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.142 00:46:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.142 00:46:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.142 00:46:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.142 00:46:35 -- accel/accel.sh@42 -- # jq -r . 00:06:51.142 [2024-07-23 00:46:35.172493] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:51.142 [2024-07-23 00:46:35.172570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287362 ] 00:06:51.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.142 [2024-07-23 00:46:35.233623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.142 [2024-07-23 00:46:35.322164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.516 00:46:36 -- accel/accel.sh@18 -- # out=' 00:06:52.516 SPDK Configuration: 00:06:52.516 Core mask: 0x1 00:06:52.516 00:06:52.516 Accel Perf Configuration: 00:06:52.516 Workload Type: copy 00:06:52.516 Transfer size: 4096 bytes 00:06:52.516 Vector count 1 00:06:52.516 Module: software 00:06:52.516 Queue depth: 32 00:06:52.516 Allocate depth: 32 00:06:52.516 # threads/core: 1 00:06:52.516 Run time: 1 seconds 00:06:52.516 Verify: Yes 00:06:52.516 00:06:52.516 Running for 1 seconds... 00:06:52.516 00:06:52.516 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.516 ------------------------------------------------------------------------------------ 00:06:52.516 0,0 277952/s 1085 MiB/s 0 0 00:06:52.516 ==================================================================================== 00:06:52.516 Total 277952/s 1085 MiB/s 0 0' 00:06:52.516 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.516 00:46:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:52.516 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.516 00:46:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:52.516 00:46:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.516 00:46:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.516 00:46:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.516 00:46:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.516 00:46:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.516 00:46:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.516 00:46:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.516 00:46:36 -- accel/accel.sh@42 -- # jq -r . 00:06:52.516 [2024-07-23 00:46:36.568930] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:52.516 [2024-07-23 00:46:36.569011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287533 ] 00:06:52.516 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.516 [2024-07-23 00:46:36.629166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.775 [2024-07-23 00:46:36.720485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=0x1 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=copy 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=software 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=32 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=32 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=1 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val=Yes 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.775 00:46:36 -- accel/accel.sh@21 -- # val= 00:06:52.775 00:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.775 00:46:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@21 -- # val= 00:06:54.148 00:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@21 -- # val= 00:06:54.148 00:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@21 -- # val= 00:06:54.148 00:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@21 -- # val= 00:06:54.148 00:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@21 -- # val= 00:06:54.148 00:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@21 -- # val= 00:06:54.148 00:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 00:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 00:46:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.148 00:46:37 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:54.148 00:46:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.148 00:06:54.148 real 0m2.804s 00:06:54.148 user 0m2.502s 00:06:54.148 sys 0m0.294s 00:06:54.148 00:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.148 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:54.148 ************************************ 00:06:54.148 END TEST accel_copy 00:06:54.148 ************************************ 00:06:54.148 00:46:37 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.148 00:46:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:54.148 00:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.148 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:54.148 ************************************ 00:06:54.148 START TEST accel_fill 00:06:54.148 ************************************ 00:06:54.148 00:46:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.148 00:46:37 -- accel/accel.sh@16 -- # local accel_opc 
00:06:54.148 00:46:37 -- accel/accel.sh@17 -- # local accel_module 00:06:54.148 00:46:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.148 00:46:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.148 00:46:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.148 00:46:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.148 00:46:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.148 00:46:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.148 00:46:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.148 00:46:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.148 00:46:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.148 00:46:37 -- accel/accel.sh@42 -- # jq -r . 00:06:54.148 [2024-07-23 00:46:38.000236] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:54.148 [2024-07-23 00:46:38.000318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287688 ] 00:06:54.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.148 [2024-07-23 00:46:38.062162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.148 [2024-07-23 00:46:38.152857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.522 00:46:39 -- accel/accel.sh@18 -- # out=' 00:06:55.522 SPDK Configuration: 00:06:55.522 Core mask: 0x1 00:06:55.522 00:06:55.522 Accel Perf Configuration: 00:06:55.522 Workload Type: fill 00:06:55.522 Fill pattern: 0x80 00:06:55.522 Transfer size: 4096 bytes 00:06:55.522 Vector count 1 00:06:55.522 Module: software 00:06:55.522 Queue depth: 64 00:06:55.522 Allocate depth: 64 00:06:55.522 # threads/core: 1 00:06:55.522 Run time: 1 seconds 00:06:55.522 Verify: Yes 00:06:55.522 00:06:55.522 Running for 1 seconds... 00:06:55.522 00:06:55.522 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.522 ------------------------------------------------------------------------------------ 00:06:55.522 0,0 402880/s 1573 MiB/s 0 0 00:06:55.522 ==================================================================================== 00:06:55.522 Total 402880/s 1573 MiB/s 0 0' 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.522 00:46:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.522 00:46:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.522 00:46:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.522 00:46:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.522 00:46:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.522 00:46:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.522 00:46:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.522 00:46:39 -- accel/accel.sh@42 -- # jq -r . 00:06:55.522 [2024-07-23 00:46:39.402811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
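For accel_fill the harness adds a fill byte and deeper queues: -f 128 is the decimal form of the 'Fill pattern: 0x80' reported in the configuration block, and -q 64 -a 64 match the 'Queue depth: 64' and 'Allocate depth: 64' lines; 402880 4096-byte fills per second is the ~1573 MiB/s in the table. A comparable standalone invocation, under the same assumptions as the copy sketch earlier in this log (built tree, JSON config omitted), might be:

# hedged sketch: fill 4 KiB buffers with pattern 0x80 for one second, with verification
./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y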
00:06:55.522 [2024-07-23 00:46:39.402883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287832 ] 00:06:55.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.522 [2024-07-23 00:46:39.464378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.522 [2024-07-23 00:46:39.554342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=0x1 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=fill 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=0x80 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=software 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=64 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=64 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- 
accel/accel.sh@21 -- # val=1 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val=Yes 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.522 00:46:39 -- accel/accel.sh@21 -- # val= 00:06:55.522 00:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.522 00:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.896 00:46:40 -- accel/accel.sh@21 -- # val= 00:06:56.896 00:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.896 00:46:40 -- accel/accel.sh@21 -- # val= 00:06:56.896 00:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.896 00:46:40 -- accel/accel.sh@21 -- # val= 00:06:56.896 00:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.896 00:46:40 -- accel/accel.sh@21 -- # val= 00:06:56.896 00:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.896 00:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.897 00:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.897 00:46:40 -- accel/accel.sh@21 -- # val= 00:06:56.897 00:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.897 00:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.897 00:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.897 00:46:40 -- accel/accel.sh@21 -- # val= 00:06:56.897 00:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.897 00:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.897 00:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.897 00:46:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.897 00:46:40 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:56.897 00:46:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.897 00:06:56.897 real 0m2.803s 00:06:56.897 user 0m2.498s 00:06:56.897 sys 0m0.296s 00:06:56.897 00:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.897 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.897 ************************************ 00:06:56.897 END TEST accel_fill 00:06:56.897 ************************************ 00:06:56.897 00:46:40 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:56.897 00:46:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:56.897 00:46:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.897 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.897 ************************************ 00:06:56.897 START TEST 
accel_copy_crc32c 00:06:56.897 ************************************ 00:06:56.897 00:46:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:56.897 00:46:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.897 00:46:40 -- accel/accel.sh@17 -- # local accel_module 00:06:56.897 00:46:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:56.897 00:46:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:56.897 00:46:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.897 00:46:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.897 00:46:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.897 00:46:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.897 00:46:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.897 00:46:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.897 00:46:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.897 00:46:40 -- accel/accel.sh@42 -- # jq -r . 00:06:56.897 [2024-07-23 00:46:40.832189] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:56.897 [2024-07-23 00:46:40.832278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288056 ] 00:06:56.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.897 [2024-07-23 00:46:40.894778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.897 [2024-07-23 00:46:40.985800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.270 00:46:42 -- accel/accel.sh@18 -- # out=' 00:06:58.270 SPDK Configuration: 00:06:58.270 Core mask: 0x1 00:06:58.270 00:06:58.270 Accel Perf Configuration: 00:06:58.270 Workload Type: copy_crc32c 00:06:58.270 CRC-32C seed: 0 00:06:58.270 Vector size: 4096 bytes 00:06:58.270 Transfer size: 4096 bytes 00:06:58.270 Vector count 1 00:06:58.270 Module: software 00:06:58.270 Queue depth: 32 00:06:58.270 Allocate depth: 32 00:06:58.270 # threads/core: 1 00:06:58.270 Run time: 1 seconds 00:06:58.270 Verify: Yes 00:06:58.270 00:06:58.270 Running for 1 seconds... 00:06:58.270 00:06:58.270 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.270 ------------------------------------------------------------------------------------ 00:06:58.270 0,0 216384/s 845 MiB/s 0 0 00:06:58.270 ==================================================================================== 00:06:58.270 Total 216384/s 845 MiB/s 0 0' 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.270 00:46:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.270 00:46:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.270 00:46:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.270 00:46:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.270 00:46:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.270 00:46:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.270 00:46:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.270 00:46:42 -- accel/accel.sh@42 -- # jq -r . 
00:06:58.270 [2024-07-23 00:46:42.218368] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:58.270 [2024-07-23 00:46:42.218445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288253 ] 00:06:58.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.270 [2024-07-23 00:46:42.280001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.270 [2024-07-23 00:46:42.370056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=0x1 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=0 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=software 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=32 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 
00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=32 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=1 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val=Yes 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:58.270 00:46:42 -- accel/accel.sh@21 -- # val= 00:06:58.270 00:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # IFS=: 00:06:58.270 00:46:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@21 -- # val= 00:06:59.646 00:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@21 -- # val= 00:06:59.646 00:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@21 -- # val= 00:06:59.646 00:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@21 -- # val= 00:06:59.646 00:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@21 -- # val= 00:06:59.646 00:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@21 -- # val= 00:06:59.646 00:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 00:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 00:46:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.646 00:46:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:59.646 00:46:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.646 00:06:59.646 real 0m2.795s 00:06:59.646 user 0m2.503s 00:06:59.646 sys 0m0.284s 00:06:59.646 00:46:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.646 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 END TEST accel_copy_crc32c 00:06:59.646 ************************************ 00:06:59.646 
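The copy_crc32c test that just ended combines a buffer copy with a CRC-32C computation over the same 4096-byte transfer, seeded with 0 ('CRC-32C seed: 0' in the configuration block); 216384 transfers/s works out to the ~845 MiB/s reported. A hedged standalone equivalent, with the same caveats as the earlier sketches:

# hedged sketch: copy + CRC-32C over 4 KiB buffers on the software path, verified
./build/examples/accel_perf -t 1 -w copy_crc32c -y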
00:46:43 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.646 00:46:43 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:59.646 00:46:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.646 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 START TEST accel_copy_crc32c_C2 00:06:59.646 ************************************ 00:06:59.646 00:46:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.646 00:46:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.646 00:46:43 -- accel/accel.sh@17 -- # local accel_module 00:06:59.646 00:46:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:59.646 00:46:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:59.646 00:46:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.646 00:46:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.646 00:46:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.646 00:46:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.646 00:46:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.646 00:46:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.646 00:46:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.646 00:46:43 -- accel/accel.sh@42 -- # jq -r . 00:06:59.646 [2024-07-23 00:46:43.652452] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:59.646 [2024-07-23 00:46:43.652532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288415 ] 00:06:59.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.646 [2024-07-23 00:46:43.713975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.646 [2024-07-23 00:46:43.804402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.052 00:46:45 -- accel/accel.sh@18 -- # out=' 00:07:01.052 SPDK Configuration: 00:07:01.052 Core mask: 0x1 00:07:01.052 00:07:01.052 Accel Perf Configuration: 00:07:01.052 Workload Type: copy_crc32c 00:07:01.052 CRC-32C seed: 0 00:07:01.052 Vector size: 4096 bytes 00:07:01.052 Transfer size: 8192 bytes 00:07:01.052 Vector count 2 00:07:01.052 Module: software 00:07:01.052 Queue depth: 32 00:07:01.052 Allocate depth: 32 00:07:01.052 # threads/core: 1 00:07:01.052 Run time: 1 seconds 00:07:01.052 Verify: Yes 00:07:01.052 00:07:01.052 Running for 1 seconds... 
00:07:01.052 00:07:01.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.052 ------------------------------------------------------------------------------------ 00:07:01.052 0,0 157344/s 1229 MiB/s 0 0 00:07:01.052 ==================================================================================== 00:07:01.052 Total 157344/s 614 MiB/s 0 0' 00:07:01.052 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.052 00:46:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.053 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.053 00:46:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:01.053 00:46:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.053 00:46:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.053 00:46:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.053 00:46:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.053 00:46:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.053 00:46:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.053 00:46:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.053 00:46:45 -- accel/accel.sh@42 -- # jq -r . 00:07:01.053 [2024-07-23 00:46:45.057775] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:01.053 [2024-07-23 00:46:45.057852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288555 ] 00:07:01.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.053 [2024-07-23 00:46:45.118522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.053 [2024-07-23 00:46:45.208537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=0x1 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=0 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 
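The _C2 variant running here is the same copy_crc32c workload invoked with -C 2, i.e. a two-element source vector: the configuration block shows 'Vector size: 4096 bytes', 'Transfer size: 8192 bytes' and 'Vector count 2'. At 157344 transfers/s the per-core row's 1229 MiB/s corresponds to the full 8192-byte transfer, while the Total row's 614 MiB/s looks like the same count multiplied by only the 4096-byte vector size; the transfer counts themselves agree. A hedged standalone equivalent:

# hedged sketch: copy_crc32c over a 2-buffer source vector (8 KiB per transfer)
./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2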
00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=software 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=32 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=32 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=1 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val=Yes 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.311 00:46:45 -- accel/accel.sh@21 -- # val= 00:07:01.311 00:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # IFS=: 00:07:01.311 00:46:45 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@21 -- # val= 00:07:02.245 00:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@21 -- # val= 00:07:02.245 00:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@21 -- # val= 00:07:02.245 00:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@21 -- # val= 00:07:02.245 00:46:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@21 -- # val= 00:07:02.245 00:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@21 -- # val= 00:07:02.245 00:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:02.245 00:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:02.245 00:46:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.245 00:46:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:02.245 00:46:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.245 00:07:02.245 real 0m2.807s 00:07:02.245 user 0m2.513s 00:07:02.245 sys 0m0.286s 00:07:02.245 00:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.245 00:46:46 -- common/autotest_common.sh@10 -- # set +x 00:07:02.245 ************************************ 00:07:02.245 END TEST accel_copy_crc32c_C2 00:07:02.245 ************************************ 00:07:02.504 00:46:46 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:02.504 00:46:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:02.504 00:46:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.504 00:46:46 -- common/autotest_common.sh@10 -- # set +x 00:07:02.504 ************************************ 00:07:02.504 START TEST accel_dualcast 00:07:02.504 ************************************ 00:07:02.504 00:46:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:02.504 00:46:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.504 00:46:46 -- accel/accel.sh@17 -- # local accel_module 00:07:02.504 00:46:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:02.504 00:46:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:02.504 00:46:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.504 00:46:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.504 00:46:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.504 00:46:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.504 00:46:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.504 00:46:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.504 00:46:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.504 00:46:46 -- accel/accel.sh@42 -- # jq -r . 00:07:02.504 [2024-07-23 00:46:46.484664] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:02.504 [2024-07-23 00:46:46.484740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288761 ] 00:07:02.504 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.504 [2024-07-23 00:46:46.546596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.504 [2024-07-23 00:46:46.638389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.877 00:46:47 -- accel/accel.sh@18 -- # out=' 00:07:03.877 SPDK Configuration: 00:07:03.877 Core mask: 0x1 00:07:03.877 00:07:03.877 Accel Perf Configuration: 00:07:03.877 Workload Type: dualcast 00:07:03.877 Transfer size: 4096 bytes 00:07:03.877 Vector count 1 00:07:03.877 Module: software 00:07:03.877 Queue depth: 32 00:07:03.877 Allocate depth: 32 00:07:03.877 # threads/core: 1 00:07:03.877 Run time: 1 seconds 00:07:03.877 Verify: Yes 00:07:03.877 00:07:03.877 Running for 1 seconds... 00:07:03.877 00:07:03.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.877 ------------------------------------------------------------------------------------ 00:07:03.877 0,0 301568/s 1178 MiB/s 0 0 00:07:03.877 ==================================================================================== 00:07:03.877 Total 301568/s 1178 MiB/s 0 0' 00:07:03.877 00:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.877 00:46:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:03.877 00:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.877 00:46:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.877 00:46:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.877 00:46:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.877 00:46:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.877 00:46:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.877 00:46:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.877 00:46:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.877 00:46:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.877 00:46:47 -- accel/accel.sh@42 -- # jq -r . 00:07:03.877 [2024-07-23 00:46:47.878572] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
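dualcast copies each 4096-byte source buffer to two destinations in one operation; the 301568 transfers/s above is roughly 1178 MiB/s of source data, matching the table. A hedged standalone equivalent, same assumptions as before:

# hedged sketch: one-source, two-destination broadcast (dualcast), software path
./build/examples/accel_perf -t 1 -w dualcast -y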
00:07:03.877 [2024-07-23 00:46:47.878669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288979 ] 00:07:03.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.877 [2024-07-23 00:46:47.939020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.877 [2024-07-23 00:46:48.029355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val=0x1 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val=dualcast 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.135 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.135 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.135 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val=software 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val=32 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val=32 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val=1 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val=Yes 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.136 00:46:48 -- accel/accel.sh@21 -- # val= 00:07:04.136 00:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.136 00:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@21 -- # val= 00:07:05.070 00:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # IFS=: 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@21 -- # val= 00:07:05.070 00:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # IFS=: 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@21 -- # val= 00:07:05.070 00:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # IFS=: 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@21 -- # val= 00:07:05.070 00:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # IFS=: 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@21 -- # val= 00:07:05.070 00:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # IFS=: 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@21 -- # val= 00:07:05.070 00:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # IFS=: 00:07:05.070 00:46:49 -- accel/accel.sh@20 -- # read -r var val 00:07:05.070 00:46:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.070 00:46:49 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:05.070 00:46:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.070 00:07:05.070 real 0m2.795s 00:07:05.070 user 0m2.488s 00:07:05.070 sys 0m0.298s 00:07:05.070 00:46:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.070 00:46:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.070 ************************************ 00:07:05.070 END TEST accel_dualcast 00:07:05.070 ************************************ 00:07:05.329 00:46:49 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:05.329 00:46:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:05.329 00:46:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.329 00:46:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.329 ************************************ 00:07:05.329 START TEST accel_compare 00:07:05.329 ************************************ 00:07:05.329 00:46:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:05.329 00:46:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.329 00:46:49 
-- accel/accel.sh@17 -- # local accel_module 00:07:05.329 00:46:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:05.329 00:46:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:05.329 00:46:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.329 00:46:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.329 00:46:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.329 00:46:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.329 00:46:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.329 00:46:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.329 00:46:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.329 00:46:49 -- accel/accel.sh@42 -- # jq -r . 00:07:05.329 [2024-07-23 00:46:49.304702] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:05.329 [2024-07-23 00:46:49.304779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289136 ] 00:07:05.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.329 [2024-07-23 00:46:49.365942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.329 [2024-07-23 00:46:49.456778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.715 00:46:50 -- accel/accel.sh@18 -- # out=' 00:07:06.715 SPDK Configuration: 00:07:06.715 Core mask: 0x1 00:07:06.715 00:07:06.715 Accel Perf Configuration: 00:07:06.715 Workload Type: compare 00:07:06.715 Transfer size: 4096 bytes 00:07:06.715 Vector count 1 00:07:06.715 Module: software 00:07:06.715 Queue depth: 32 00:07:06.715 Allocate depth: 32 00:07:06.715 # threads/core: 1 00:07:06.715 Run time: 1 seconds 00:07:06.715 Verify: Yes 00:07:06.715 00:07:06.715 Running for 1 seconds... 00:07:06.715 00:07:06.715 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.715 ------------------------------------------------------------------------------------ 00:07:06.715 0,0 399104/s 1559 MiB/s 0 0 00:07:06.716 ==================================================================================== 00:07:06.716 Total 399104/s 1559 MiB/s 0 0' 00:07:06.716 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.716 00:46:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:06.716 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.716 00:46:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:06.716 00:46:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.716 00:46:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.716 00:46:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.716 00:46:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.716 00:46:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.716 00:46:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.716 00:46:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.716 00:46:50 -- accel/accel.sh@42 -- # jq -r . 00:07:06.716 [2024-07-23 00:46:50.706096] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
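compare only checks two 4096-byte buffers for equality rather than moving data, so the failed/miscompare counters stay at zero and 399104 transfers/s here is about 1559 MiB/s of compared data. A hedged standalone equivalent:

# hedged sketch: byte-wise comparison of paired 4 KiB buffers, verified
./build/examples/accel_perf -t 1 -w compare -y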
00:07:06.716 [2024-07-23 00:46:50.706173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289282 ] 00:07:06.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.716 [2024-07-23 00:46:50.766120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.716 [2024-07-23 00:46:50.855973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.716 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val=0x1 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val=compare 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.973 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.973 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.973 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val=software 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val=32 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val=32 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val=1 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val=Yes 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.974 00:46:50 -- accel/accel.sh@21 -- # val= 00:07:06.974 00:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:06.974 00:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@21 -- # val= 00:07:07.907 00:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@21 -- # val= 00:07:07.907 00:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@21 -- # val= 00:07:07.907 00:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@21 -- # val= 00:07:07.907 00:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@21 -- # val= 00:07:07.907 00:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@21 -- # val= 00:07:07.907 00:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:07.907 00:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:07.907 00:46:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.907 00:46:52 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:07.907 00:46:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.907 00:07:07.907 real 0m2.802s 00:07:07.907 user 0m2.514s 00:07:07.907 sys 0m0.281s 00:07:07.907 00:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.907 00:46:52 -- common/autotest_common.sh@10 -- # set +x 00:07:07.907 ************************************ 00:07:07.907 END TEST accel_compare 00:07:07.907 ************************************ 00:07:08.165 00:46:52 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:08.165 00:46:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:08.165 00:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.165 00:46:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.165 ************************************ 00:07:08.165 START TEST accel_xor 00:07:08.165 ************************************ 00:07:08.165 00:46:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:08.165 00:46:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.165 00:46:52 -- accel/accel.sh@17 
-- # local accel_module 00:07:08.165 00:46:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:08.165 00:46:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.165 00:46:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.165 00:46:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.165 00:46:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.165 00:46:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.165 00:46:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.165 00:46:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.165 00:46:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.165 00:46:52 -- accel/accel.sh@42 -- # jq -r . 00:07:08.165 [2024-07-23 00:46:52.131358] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:08.165 [2024-07-23 00:46:52.131444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289454 ] 00:07:08.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.165 [2024-07-23 00:46:52.195751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.165 [2024-07-23 00:46:52.286001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.536 00:46:53 -- accel/accel.sh@18 -- # out=' 00:07:09.536 SPDK Configuration: 00:07:09.536 Core mask: 0x1 00:07:09.536 00:07:09.536 Accel Perf Configuration: 00:07:09.536 Workload Type: xor 00:07:09.536 Source buffers: 2 00:07:09.536 Transfer size: 4096 bytes 00:07:09.536 Vector count 1 00:07:09.536 Module: software 00:07:09.536 Queue depth: 32 00:07:09.536 Allocate depth: 32 00:07:09.536 # threads/core: 1 00:07:09.536 Run time: 1 seconds 00:07:09.536 Verify: Yes 00:07:09.536 00:07:09.536 Running for 1 seconds... 00:07:09.536 00:07:09.536 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.536 ------------------------------------------------------------------------------------ 00:07:09.536 0,0 192960/s 753 MiB/s 0 0 00:07:09.536 ==================================================================================== 00:07:09.536 Total 192960/s 753 MiB/s 0 0' 00:07:09.536 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.536 00:46:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:09.536 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.536 00:46:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:09.536 00:46:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.536 00:46:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.537 00:46:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.537 00:46:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.537 00:46:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.537 00:46:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.537 00:46:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.537 00:46:53 -- accel/accel.sh@42 -- # jq -r . 00:07:09.537 [2024-07-23 00:46:53.537547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:09.537 [2024-07-23 00:46:53.537632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289701 ] 00:07:09.537 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.537 [2024-07-23 00:46:53.597448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.537 [2024-07-23 00:46:53.689749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=0x1 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=xor 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=2 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=software 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=32 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=32 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- 
accel/accel.sh@21 -- # val=1 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val=Yes 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.795 00:46:53 -- accel/accel.sh@21 -- # val= 00:07:09.795 00:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.795 00:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.729 00:46:54 -- accel/accel.sh@21 -- # val= 00:07:10.729 00:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.729 00:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:10.730 00:46:54 -- accel/accel.sh@21 -- # val= 00:07:10.730 00:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:10.730 00:46:54 -- accel/accel.sh@21 -- # val= 00:07:10.730 00:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:10.730 00:46:54 -- accel/accel.sh@21 -- # val= 00:07:10.730 00:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:10.730 00:46:54 -- accel/accel.sh@21 -- # val= 00:07:10.730 00:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:10.730 00:46:54 -- accel/accel.sh@21 -- # val= 00:07:10.730 00:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:10.730 00:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:10.730 00:46:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.730 00:46:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:10.730 00:46:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.730 00:07:10.730 real 0m2.803s 00:07:10.730 user 0m2.500s 00:07:10.730 sys 0m0.296s 00:07:10.730 00:46:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.730 00:46:54 -- common/autotest_common.sh@10 -- # set +x 00:07:10.730 ************************************ 00:07:10.730 END TEST accel_xor 00:07:10.730 ************************************ 00:07:10.988 00:46:54 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:10.988 00:46:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:10.988 00:46:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.988 00:46:54 -- common/autotest_common.sh@10 -- # set +x 00:07:10.988 ************************************ 00:07:10.988 START TEST accel_xor 
00:07:10.988 ************************************ 00:07:10.988 00:46:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:10.988 00:46:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.988 00:46:54 -- accel/accel.sh@17 -- # local accel_module 00:07:10.988 00:46:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:10.988 00:46:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:10.988 00:46:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.988 00:46:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.988 00:46:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.988 00:46:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.988 00:46:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.988 00:46:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.988 00:46:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.988 00:46:54 -- accel/accel.sh@42 -- # jq -r . 00:07:10.988 [2024-07-23 00:46:54.960178] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:10.988 [2024-07-23 00:46:54.960255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289863 ] 00:07:10.988 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.989 [2024-07-23 00:46:55.021824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.989 [2024-07-23 00:46:55.112884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.362 00:46:56 -- accel/accel.sh@18 -- # out=' 00:07:12.362 SPDK Configuration: 00:07:12.362 Core mask: 0x1 00:07:12.362 00:07:12.362 Accel Perf Configuration: 00:07:12.362 Workload Type: xor 00:07:12.362 Source buffers: 3 00:07:12.362 Transfer size: 4096 bytes 00:07:12.362 Vector count 1 00:07:12.362 Module: software 00:07:12.362 Queue depth: 32 00:07:12.362 Allocate depth: 32 00:07:12.362 # threads/core: 1 00:07:12.362 Run time: 1 seconds 00:07:12.362 Verify: Yes 00:07:12.362 00:07:12.362 Running for 1 seconds... 00:07:12.362 00:07:12.362 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.362 ------------------------------------------------------------------------------------ 00:07:12.362 0,0 184736/s 721 MiB/s 0 0 00:07:12.362 ==================================================================================== 00:07:12.362 Total 184736/s 721 MiB/s 0 0' 00:07:12.362 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.362 00:46:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.362 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.362 00:46:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.362 00:46:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.362 00:46:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.362 00:46:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.362 00:46:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.362 00:46:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.362 00:46:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.362 00:46:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.362 00:46:56 -- accel/accel.sh@42 -- # jq -r . 00:07:12.362 [2024-07-23 00:46:56.366908] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:12.362 [2024-07-23 00:46:56.366986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290003 ] 00:07:12.362 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.362 [2024-07-23 00:46:56.427058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.362 [2024-07-23 00:46:56.517188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.620 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.620 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.620 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.620 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.620 00:46:56 -- accel/accel.sh@21 -- # val=0x1 00:07:12.620 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.620 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.620 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.620 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.620 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.620 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val=xor 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val=3 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val=software 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val=32 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val=32 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- 
accel/accel.sh@21 -- # val=1 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val=Yes 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.621 00:46:56 -- accel/accel.sh@21 -- # val= 00:07:12.621 00:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.621 00:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@21 -- # val= 00:07:13.556 00:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@21 -- # val= 00:07:13.556 00:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@21 -- # val= 00:07:13.556 00:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@21 -- # val= 00:07:13.556 00:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@21 -- # val= 00:07:13.556 00:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@21 -- # val= 00:07:13.556 00:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.556 00:46:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.556 00:46:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.556 00:46:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:13.556 00:46:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.556 00:07:13.556 real 0m2.810s 00:07:13.556 user 0m2.508s 00:07:13.556 sys 0m0.294s 00:07:13.556 00:46:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.556 00:46:57 -- common/autotest_common.sh@10 -- # set +x 00:07:13.556 ************************************ 00:07:13.556 END TEST accel_xor 00:07:13.556 ************************************ 00:07:13.815 00:46:57 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:13.815 00:46:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:13.815 00:46:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.815 00:46:57 -- common/autotest_common.sh@10 -- # set +x 00:07:13.815 ************************************ 00:07:13.815 START TEST 
accel_dif_verify 00:07:13.815 ************************************ 00:07:13.815 00:46:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:13.815 00:46:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.815 00:46:57 -- accel/accel.sh@17 -- # local accel_module 00:07:13.815 00:46:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:13.815 00:46:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:13.815 00:46:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.815 00:46:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.815 00:46:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.815 00:46:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.815 00:46:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.815 00:46:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.815 00:46:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.815 00:46:57 -- accel/accel.sh@42 -- # jq -r . 00:07:13.815 [2024-07-23 00:46:57.797704] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:13.815 [2024-07-23 00:46:57.797775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290168 ] 00:07:13.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.815 [2024-07-23 00:46:57.858491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.815 [2024-07-23 00:46:57.950733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.188 00:46:59 -- accel/accel.sh@18 -- # out=' 00:07:15.188 SPDK Configuration: 00:07:15.188 Core mask: 0x1 00:07:15.188 00:07:15.188 Accel Perf Configuration: 00:07:15.188 Workload Type: dif_verify 00:07:15.188 Vector size: 4096 bytes 00:07:15.188 Transfer size: 4096 bytes 00:07:15.188 Block size: 512 bytes 00:07:15.188 Metadata size: 8 bytes 00:07:15.188 Vector count 1 00:07:15.188 Module: software 00:07:15.188 Queue depth: 32 00:07:15.188 Allocate depth: 32 00:07:15.188 # threads/core: 1 00:07:15.188 Run time: 1 seconds 00:07:15.188 Verify: No 00:07:15.188 00:07:15.188 Running for 1 seconds... 00:07:15.188 00:07:15.188 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.188 ------------------------------------------------------------------------------------ 00:07:15.188 0,0 81696/s 324 MiB/s 0 0 00:07:15.188 ==================================================================================== 00:07:15.188 Total 81696/s 319 MiB/s 0 0' 00:07:15.188 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.188 00:46:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:15.188 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.188 00:46:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:15.188 00:46:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.188 00:46:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.188 00:46:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.188 00:46:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.188 00:46:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.188 00:46:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.188 00:46:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.188 00:46:59 -- accel/accel.sh@42 -- # jq -r . 
00:07:15.188 [2024-07-23 00:46:59.201020] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:15.188 [2024-07-23 00:46:59.201090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290421 ] 00:07:15.188 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.188 [2024-07-23 00:46:59.261751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.188 [2024-07-23 00:46:59.349996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.446 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.446 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.446 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.446 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.446 00:46:59 -- accel/accel.sh@21 -- # val=0x1 00:07:15.446 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.446 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.446 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.446 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.446 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.446 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val=dif_verify 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val=software 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val=32 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val=32 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val=1 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val=No 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:15.447 00:46:59 -- accel/accel.sh@21 -- # val= 00:07:15.447 00:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:15.447 00:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@21 -- # val= 00:07:16.380 00:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@21 -- # val= 00:07:16.380 00:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@21 -- # val= 00:07:16.380 00:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@21 -- # val= 00:07:16.380 00:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@21 -- # val= 00:07:16.380 00:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@21 -- # val= 00:07:16.380 00:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.380 00:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.380 00:47:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.380 00:47:00 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:16.380 00:47:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.380 00:07:16.380 real 0m2.786s 00:07:16.380 user 0m2.501s 00:07:16.380 sys 0m0.279s 00:07:16.380 00:47:00 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.380 00:47:00 -- common/autotest_common.sh@10 -- # set +x 00:07:16.380 ************************************ 00:07:16.380 END TEST accel_dif_verify 00:07:16.380 ************************************ 00:07:16.638 00:47:00 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:16.639 00:47:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:16.639 00:47:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.639 00:47:00 -- common/autotest_common.sh@10 -- # set +x 00:07:16.639 ************************************ 00:07:16.639 START TEST accel_dif_generate 00:07:16.639 ************************************ 00:07:16.639 00:47:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:16.639 00:47:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.639 00:47:00 -- accel/accel.sh@17 -- # local accel_module 00:07:16.639 00:47:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:16.639 00:47:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:16.639 00:47:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.639 00:47:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.639 00:47:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.639 00:47:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.639 00:47:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.639 00:47:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.639 00:47:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.639 00:47:00 -- accel/accel.sh@42 -- # jq -r . 00:07:16.639 [2024-07-23 00:47:00.611442] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:16.639 [2024-07-23 00:47:00.611534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290590 ] 00:07:16.639 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.639 [2024-07-23 00:47:00.674305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.639 [2024-07-23 00:47:00.763440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.012 00:47:02 -- accel/accel.sh@18 -- # out=' 00:07:18.012 SPDK Configuration: 00:07:18.012 Core mask: 0x1 00:07:18.012 00:07:18.012 Accel Perf Configuration: 00:07:18.012 Workload Type: dif_generate 00:07:18.012 Vector size: 4096 bytes 00:07:18.012 Transfer size: 4096 bytes 00:07:18.012 Block size: 512 bytes 00:07:18.012 Metadata size: 8 bytes 00:07:18.012 Vector count 1 00:07:18.012 Module: software 00:07:18.012 Queue depth: 32 00:07:18.012 Allocate depth: 32 00:07:18.012 # threads/core: 1 00:07:18.012 Run time: 1 seconds 00:07:18.012 Verify: No 00:07:18.012 00:07:18.012 Running for 1 seconds... 
00:07:18.012 00:07:18.012 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.012 ------------------------------------------------------------------------------------ 00:07:18.012 0,0 96000/s 380 MiB/s 0 0 00:07:18.012 ==================================================================================== 00:07:18.012 Total 96000/s 375 MiB/s 0 0' 00:07:18.012 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.012 00:47:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:18.012 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.012 00:47:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:18.012 00:47:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.012 00:47:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.012 00:47:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.012 00:47:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.012 00:47:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.012 00:47:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.012 00:47:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.012 00:47:02 -- accel/accel.sh@42 -- # jq -r . 00:07:18.012 [2024-07-23 00:47:02.020795] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:18.012 [2024-07-23 00:47:02.020874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290730 ] 00:07:18.012 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.012 [2024-07-23 00:47:02.081376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.012 [2024-07-23 00:47:02.174315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=0x1 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=dif_generate 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 
00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=software 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=32 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=32 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=1 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val=No 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:18.313 00:47:02 -- accel/accel.sh@21 -- # val= 00:07:18.313 00:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:18.313 00:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@21 -- # val= 00:07:19.269 00:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # IFS=: 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@21 -- # val= 00:07:19.269 00:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # IFS=: 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@21 -- # val= 00:07:19.269 00:47:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # IFS=: 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@21 -- # val= 00:07:19.269 00:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # IFS=: 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@21 -- # val= 00:07:19.269 00:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # IFS=: 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@21 -- # val= 00:07:19.269 00:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # IFS=: 00:07:19.269 00:47:03 -- accel/accel.sh@20 -- # read -r var val 00:07:19.269 00:47:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.269 00:47:03 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:19.269 00:47:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.269 00:07:19.269 real 0m2.823s 00:07:19.269 user 0m2.527s 00:07:19.269 sys 0m0.290s 00:07:19.269 00:47:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.269 00:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:19.269 ************************************ 00:07:19.269 END TEST accel_dif_generate 00:07:19.269 ************************************ 00:07:19.269 00:47:03 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:19.269 00:47:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:19.269 00:47:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.269 00:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:19.269 ************************************ 00:07:19.269 START TEST accel_dif_generate_copy 00:07:19.269 ************************************ 00:07:19.269 00:47:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:19.269 00:47:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.269 00:47:03 -- accel/accel.sh@17 -- # local accel_module 00:07:19.269 00:47:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:19.269 00:47:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:19.269 00:47:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.269 00:47:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.269 00:47:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.269 00:47:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.269 00:47:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.269 00:47:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.269 00:47:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.269 00:47:03 -- accel/accel.sh@42 -- # jq -r . 00:07:19.269 [2024-07-23 00:47:03.459268] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:19.269 [2024-07-23 00:47:03.459353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290885 ] 00:07:19.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.527 [2024-07-23 00:47:03.523690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.527 [2024-07-23 00:47:03.616734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.900 00:47:04 -- accel/accel.sh@18 -- # out=' 00:07:20.900 SPDK Configuration: 00:07:20.900 Core mask: 0x1 00:07:20.900 00:07:20.900 Accel Perf Configuration: 00:07:20.900 Workload Type: dif_generate_copy 00:07:20.900 Vector size: 4096 bytes 00:07:20.900 Transfer size: 4096 bytes 00:07:20.900 Vector count 1 00:07:20.900 Module: software 00:07:20.900 Queue depth: 32 00:07:20.900 Allocate depth: 32 00:07:20.900 # threads/core: 1 00:07:20.900 Run time: 1 seconds 00:07:20.900 Verify: No 00:07:20.900 00:07:20.900 Running for 1 seconds... 00:07:20.900 00:07:20.900 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.900 ------------------------------------------------------------------------------------ 00:07:20.900 0,0 76096/s 301 MiB/s 0 0 00:07:20.900 ==================================================================================== 00:07:20.900 Total 76096/s 297 MiB/s 0 0' 00:07:20.900 00:47:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:20.900 00:47:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:20.900 00:47:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.900 00:47:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.900 00:47:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.900 00:47:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.900 00:47:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.900 00:47:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.900 00:47:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.900 00:47:04 -- accel/accel.sh@42 -- # jq -r . 00:07:20.900 [2024-07-23 00:47:04.865639] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:20.900 [2024-07-23 00:47:04.865722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291150 ] 00:07:20.900 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.900 [2024-07-23 00:47:04.927280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.900 [2024-07-23 00:47:05.019014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=0x1 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=software 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=32 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=32 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r 
var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=1 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val=No 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.900 00:47:05 -- accel/accel.sh@21 -- # val= 00:07:20.900 00:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.900 00:47:05 -- accel/accel.sh@20 -- # read -r var val 00:07:22.273 00:47:06 -- accel/accel.sh@21 -- # val= 00:07:22.273 00:47:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.273 00:47:06 -- accel/accel.sh@20 -- # IFS=: 00:07:22.273 00:47:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.273 00:47:06 -- accel/accel.sh@21 -- # val= 00:07:22.273 00:47:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.273 00:47:06 -- accel/accel.sh@20 -- # IFS=: 00:07:22.273 00:47:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.274 00:47:06 -- accel/accel.sh@21 -- # val= 00:07:22.274 00:47:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # IFS=: 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.274 00:47:06 -- accel/accel.sh@21 -- # val= 00:07:22.274 00:47:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # IFS=: 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.274 00:47:06 -- accel/accel.sh@21 -- # val= 00:07:22.274 00:47:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # IFS=: 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.274 00:47:06 -- accel/accel.sh@21 -- # val= 00:07:22.274 00:47:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # IFS=: 00:07:22.274 00:47:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.274 00:47:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.274 00:47:06 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:22.274 00:47:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.274 00:07:22.274 real 0m2.806s 00:07:22.274 user 0m2.505s 00:07:22.274 sys 0m0.292s 00:07:22.274 00:47:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.274 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:07:22.274 ************************************ 00:07:22.274 END TEST accel_dif_generate_copy 00:07:22.274 ************************************ 00:07:22.274 00:47:06 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:22.274 00:47:06 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.274 00:47:06 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:22.274 00:47:06 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.274 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:07:22.274 ************************************ 00:07:22.274 START TEST accel_comp 00:07:22.274 ************************************ 00:07:22.274 00:47:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.274 00:47:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.274 00:47:06 -- accel/accel.sh@17 -- # local accel_module 00:07:22.274 00:47:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.274 00:47:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.274 00:47:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.274 00:47:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.274 00:47:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.274 00:47:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.274 00:47:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.274 00:47:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.274 00:47:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.274 00:47:06 -- accel/accel.sh@42 -- # jq -r . 00:07:22.274 [2024-07-23 00:47:06.290973] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:22.274 [2024-07-23 00:47:06.291050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291312 ] 00:07:22.274 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.274 [2024-07-23 00:47:06.355157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.274 [2024-07-23 00:47:06.448210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.647 00:47:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:23.647 00:07:23.647 SPDK Configuration: 00:07:23.647 Core mask: 0x1 00:07:23.647 00:07:23.647 Accel Perf Configuration: 00:07:23.647 Workload Type: compress 00:07:23.647 Transfer size: 4096 bytes 00:07:23.647 Vector count 1 00:07:23.647 Module: software 00:07:23.647 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.647 Queue depth: 32 00:07:23.647 Allocate depth: 32 00:07:23.647 # threads/core: 1 00:07:23.647 Run time: 1 seconds 00:07:23.647 Verify: No 00:07:23.647 00:07:23.647 Running for 1 seconds... 
00:07:23.647 00:07:23.647 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.647 ------------------------------------------------------------------------------------ 00:07:23.647 0,0 32384/s 134 MiB/s 0 0 00:07:23.647 ==================================================================================== 00:07:23.647 Total 32384/s 126 MiB/s 0 0' 00:07:23.647 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.647 00:47:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.647 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.647 00:47:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.647 00:47:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.647 00:47:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.647 00:47:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.647 00:47:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.647 00:47:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.647 00:47:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.647 00:47:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.647 00:47:07 -- accel/accel.sh@42 -- # jq -r . 00:07:23.647 [2024-07-23 00:47:07.706088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:23.647 [2024-07-23 00:47:07.706171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291459 ] 00:07:23.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.647 [2024-07-23 00:47:07.770424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.906 [2024-07-23 00:47:07.864520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=0x1 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=compress 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 
00:47:07 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=software 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=32 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=32 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=1 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val=No 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:23.906 00:47:07 -- accel/accel.sh@21 -- # val= 00:07:23.906 00:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # IFS=: 00:07:23.906 00:47:07 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@21 -- # val= 00:07:25.280 00:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # IFS=: 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@21 -- # val= 00:07:25.280 00:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # IFS=: 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@21 -- # val= 00:07:25.280 00:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # 
IFS=: 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@21 -- # val= 00:07:25.280 00:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # IFS=: 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@21 -- # val= 00:07:25.280 00:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # IFS=: 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@21 -- # val= 00:07:25.280 00:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # IFS=: 00:07:25.280 00:47:09 -- accel/accel.sh@20 -- # read -r var val 00:07:25.280 00:47:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.280 00:47:09 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:25.280 00:47:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.280 00:07:25.280 real 0m2.835s 00:07:25.280 user 0m2.526s 00:07:25.280 sys 0m0.302s 00:07:25.280 00:47:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.280 00:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:25.280 ************************************ 00:07:25.280 END TEST accel_comp 00:07:25.280 ************************************ 00:07:25.280 00:47:09 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.280 00:47:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:25.280 00:47:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.280 00:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:25.280 ************************************ 00:07:25.280 START TEST accel_decomp 00:07:25.280 ************************************ 00:07:25.280 00:47:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.280 00:47:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.280 00:47:09 -- accel/accel.sh@17 -- # local accel_module 00:07:25.280 00:47:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.280 00:47:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.280 00:47:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.280 00:47:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.280 00:47:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.280 00:47:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.280 00:47:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.280 00:47:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.280 00:47:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.280 00:47:09 -- accel/accel.sh@42 -- # jq -r . 00:07:25.280 [2024-07-23 00:47:09.152068] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
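The accel_comp run above exercises the software compress engine through the accel_perf example app. A minimal way to reproduce roughly the same invocation by hand, assuming an SPDK checkout with build/examples/accel_perf already built and hugepages set up as on this CI node, would be:

  cd /path/to/spdk   # the CI node uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software compress of the bundled bib test file, 4 KiB transfers, no verification
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib

The test harness additionally passes -c /dev/fd/62 with a JSON accel config produced by build_accel_config; since accel_json_cfg=() stays empty in this run, the sketch assumes accel_perf simply uses its defaults when that option is left out.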
00:07:25.280 [2024-07-23 00:47:09.152148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291612 ] 00:07:25.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.280 [2024-07-23 00:47:09.213640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.280 [2024-07-23 00:47:09.307266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.654 00:47:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.654 00:07:26.654 SPDK Configuration: 00:07:26.654 Core mask: 0x1 00:07:26.654 00:07:26.654 Accel Perf Configuration: 00:07:26.654 Workload Type: decompress 00:07:26.654 Transfer size: 4096 bytes 00:07:26.654 Vector count 1 00:07:26.655 Module: software 00:07:26.655 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.655 Queue depth: 32 00:07:26.655 Allocate depth: 32 00:07:26.655 # threads/core: 1 00:07:26.655 Run time: 1 seconds 00:07:26.655 Verify: Yes 00:07:26.655 00:07:26.655 Running for 1 seconds... 00:07:26.655 00:07:26.655 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.655 ------------------------------------------------------------------------------------ 00:07:26.655 0,0 55616/s 102 MiB/s 0 0 00:07:26.655 ==================================================================================== 00:07:26.655 Total 55616/s 217 MiB/s 0 0' 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.655 00:47:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.655 00:47:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.655 00:47:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.655 00:47:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.655 00:47:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.655 00:47:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.655 00:47:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.655 00:47:10 -- accel/accel.sh@42 -- # jq -r . 00:07:26.655 [2024-07-23 00:47:10.562504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
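accel_decomp repeats the same run with -w decompress plus the -y flag, which shows up in the configuration block above as Verify: Yes (the decompressed output is checked against the original data). A hand-run sketch under the same assumptions as the compress example:

  # 1-second software decompress with verification, 4 KiB transfers
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y

As a sanity check on the result table, the Total row is consistent with transfers/s times the transfer size: 55616/s x 4096 bytes is about 227.8 MB/s, i.e. roughly the 217 MiB/s reported.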
00:07:26.655 [2024-07-23 00:47:10.562586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291869 ] 00:07:26.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.655 [2024-07-23 00:47:10.622652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.655 [2024-07-23 00:47:10.715474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=0x1 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=decompress 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=software 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=32 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 
-- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=32 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=1 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val=Yes 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:26.655 00:47:10 -- accel/accel.sh@21 -- # val= 00:07:26.655 00:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # IFS=: 00:07:26.655 00:47:10 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@21 -- # val= 00:07:28.028 00:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@21 -- # val= 00:07:28.028 00:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@21 -- # val= 00:07:28.028 00:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@21 -- # val= 00:07:28.028 00:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@21 -- # val= 00:07:28.028 00:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@21 -- # val= 00:07:28.028 00:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.028 00:47:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.028 00:47:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.028 00:47:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.028 00:47:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.028 00:07:28.028 real 0m2.817s 00:07:28.028 user 0m2.506s 00:07:28.028 sys 0m0.303s 00:07:28.028 00:47:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.028 00:47:11 -- common/autotest_common.sh@10 -- # set +x 00:07:28.028 ************************************ 00:07:28.028 END TEST accel_decomp 00:07:28.028 ************************************ 00:07:28.028 00:47:11 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.028 00:47:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:28.028 00:47:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.028 00:47:11 -- common/autotest_common.sh@10 -- # set +x 00:07:28.028 ************************************ 00:07:28.028 START TEST accel_decmop_full 00:07:28.028 ************************************ 00:07:28.028 00:47:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.028 00:47:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.028 00:47:11 -- accel/accel.sh@17 -- # local accel_module 00:07:28.028 00:47:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.028 00:47:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.028 00:47:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.028 00:47:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.028 00:47:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.028 00:47:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.028 00:47:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.028 00:47:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.028 00:47:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.028 00:47:11 -- accel/accel.sh@42 -- # jq -r . 00:07:28.028 [2024-07-23 00:47:11.997080] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:28.028 [2024-07-23 00:47:11.997164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292035 ] 00:07:28.028 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.028 [2024-07-23 00:47:12.057673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.028 [2024-07-23 00:47:12.148410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.400 00:47:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:29.400 00:07:29.400 SPDK Configuration: 00:07:29.400 Core mask: 0x1 00:07:29.400 00:07:29.400 Accel Perf Configuration: 00:07:29.400 Workload Type: decompress 00:07:29.400 Transfer size: 111250 bytes 00:07:29.400 Vector count 1 00:07:29.400 Module: software 00:07:29.400 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.400 Queue depth: 32 00:07:29.400 Allocate depth: 32 00:07:29.400 # threads/core: 1 00:07:29.400 Run time: 1 seconds 00:07:29.400 Verify: Yes 00:07:29.400 00:07:29.400 Running for 1 seconds... 
00:07:29.400 00:07:29.400 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.400 ------------------------------------------------------------------------------------ 00:07:29.400 0,0 3808/s 157 MiB/s 0 0 00:07:29.400 ==================================================================================== 00:07:29.400 Total 3808/s 404 MiB/s 0 0' 00:07:29.400 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.400 00:47:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.400 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.400 00:47:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.400 00:47:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.400 00:47:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.400 00:47:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.400 00:47:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.400 00:47:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.400 00:47:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.400 00:47:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.400 00:47:13 -- accel/accel.sh@42 -- # jq -r . 00:07:29.400 [2024-07-23 00:47:13.414662] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:29.400 [2024-07-23 00:47:13.414747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292179 ] 00:07:29.400 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.400 [2024-07-23 00:47:13.474966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.400 [2024-07-23 00:47:13.568368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val=0x1 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val=decompress 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:29.657 00:47:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.657 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.657 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.657 00:47:13 -- accel/accel.sh@21 -- # val=software 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val=32 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val=32 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val=1 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val=Yes 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.658 00:47:13 -- accel/accel.sh@21 -- # val= 00:07:29.658 00:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # IFS=: 00:07:29.658 00:47:13 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@21 -- # val= 00:07:31.031 00:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@21 -- # val= 00:07:31.031 00:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@21 -- # val= 00:07:31.031 00:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 00:47:14 -- 
accel/accel.sh@20 -- # IFS=: 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@21 -- # val= 00:07:31.031 00:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@21 -- # val= 00:07:31.031 00:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@21 -- # val= 00:07:31.031 00:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 00:47:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 00:47:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.031 00:47:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.031 00:47:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.031 00:07:31.031 real 0m2.847s 00:07:31.031 user 0m2.556s 00:07:31.031 sys 0m0.284s 00:07:31.031 00:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.031 00:47:14 -- common/autotest_common.sh@10 -- # set +x 00:07:31.031 ************************************ 00:07:31.031 END TEST accel_decmop_full 00:07:31.031 ************************************ 00:07:31.031 00:47:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.031 00:47:14 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:31.031 00:47:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.031 00:47:14 -- common/autotest_common.sh@10 -- # set +x 00:07:31.031 ************************************ 00:07:31.031 START TEST accel_decomp_mcore 00:07:31.031 ************************************ 00:07:31.031 00:47:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.031 00:47:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.031 00:47:14 -- accel/accel.sh@17 -- # local accel_module 00:07:31.031 00:47:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.031 00:47:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.031 00:47:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.031 00:47:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.031 00:47:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.031 00:47:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.031 00:47:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.031 00:47:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.031 00:47:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.031 00:47:14 -- accel/accel.sh@42 -- # jq -r . 00:07:31.031 [2024-07-23 00:47:14.868002] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
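accel_decmop_full runs the same decompress workload with an extra -o 0 argument; in the configuration block above this corresponds to the transfer size growing from 4096 to 111250 bytes, i.e. each operation apparently decompresses a whole chunk of the bib input rather than a 4 KiB slice. Hand-run sketch, same assumptions as before:

  # full-size transfers (111250 bytes per the config block) instead of 4 KiB
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0

Fewer but larger operations: the Total row, 3808/s x 111250 bytes, works out to roughly 404 MiB/s, matching the table.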
00:07:31.031 [2024-07-23 00:47:14.868082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292340 ] 00:07:31.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.031 [2024-07-23 00:47:14.933361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.031 [2024-07-23 00:47:15.028593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.031 [2024-07-23 00:47:15.028647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.031 [2024-07-23 00:47:15.028675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.031 [2024-07-23 00:47:15.028678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.403 00:47:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.403 00:07:32.403 SPDK Configuration: 00:07:32.403 Core mask: 0xf 00:07:32.403 00:07:32.403 Accel Perf Configuration: 00:07:32.403 Workload Type: decompress 00:07:32.403 Transfer size: 4096 bytes 00:07:32.403 Vector count 1 00:07:32.403 Module: software 00:07:32.403 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.403 Queue depth: 32 00:07:32.403 Allocate depth: 32 00:07:32.403 # threads/core: 1 00:07:32.403 Run time: 1 seconds 00:07:32.403 Verify: Yes 00:07:32.403 00:07:32.403 Running for 1 seconds... 00:07:32.403 00:07:32.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.403 ------------------------------------------------------------------------------------ 00:07:32.403 0,0 50432/s 92 MiB/s 0 0 00:07:32.403 3,0 50976/s 93 MiB/s 0 0 00:07:32.403 2,0 50944/s 93 MiB/s 0 0 00:07:32.403 1,0 50720/s 93 MiB/s 0 0 00:07:32.403 ==================================================================================== 00:07:32.403 Total 203072/s 793 MiB/s 0 0' 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.403 00:47:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.403 00:47:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.403 00:47:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.403 00:47:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.403 00:47:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.403 00:47:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.403 00:47:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.403 00:47:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.403 00:47:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.403 00:47:16 -- accel/accel.sh@42 -- # jq -r . 00:07:32.403 [2024-07-23 00:47:16.291571] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:32.403 [2024-07-23 00:47:16.291661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292600 ] 00:07:32.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.403 [2024-07-23 00:47:16.356141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.403 [2024-07-23 00:47:16.451608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.403 [2024-07-23 00:47:16.451651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.403 [2024-07-23 00:47:16.451680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.403 [2024-07-23 00:47:16.451683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.403 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.403 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.403 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.403 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.403 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.403 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.403 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.403 00:47:16 -- accel/accel.sh@21 -- # val=0xf 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=decompress 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=software 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=32 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=32 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=1 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val=Yes 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:32.404 00:47:16 -- accel/accel.sh@21 -- # val= 00:07:32.404 00:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # IFS=: 00:07:32.404 00:47:16 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 
00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@21 -- # val= 00:07:33.777 00:47:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # IFS=: 00:07:33.777 00:47:17 -- accel/accel.sh@20 -- # read -r var val 00:07:33.777 00:47:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.777 00:47:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.777 00:47:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.777 00:07:33.777 real 0m2.840s 00:07:33.777 user 0m9.445s 00:07:33.777 sys 0m0.303s 00:07:33.777 00:47:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.777 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:07:33.777 ************************************ 00:07:33.777 END TEST accel_decomp_mcore 00:07:33.777 ************************************ 00:07:33.777 00:47:17 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.777 00:47:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:33.777 00:47:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.777 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:07:33.777 ************************************ 00:07:33.777 START TEST accel_decomp_full_mcore 00:07:33.777 ************************************ 00:07:33.777 00:47:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.777 00:47:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.777 00:47:17 -- accel/accel.sh@17 -- # local accel_module 00:07:33.777 00:47:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.777 00:47:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.777 00:47:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.777 00:47:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.777 00:47:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.777 00:47:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.777 00:47:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.777 00:47:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.777 00:47:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.777 00:47:17 -- accel/accel.sh@42 -- # jq -r . 00:07:33.777 [2024-07-23 00:47:17.734741] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
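accel_decomp_mcore adds -m 0xf, a four-core mask: the EAL parameters switch from -c 0x1 to -c 0xf, "Total cores available: 4" is logged, four reactors start, and the result table gains one row per core. Sketch under the same assumptions:

  # same decompress run spread over cores 0-3
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf

Each core sustains roughly 50k transfers/s, close to the single-core rate, and the Total row, 203072/s x 4096 bytes, comes to about 793 MiB/s as reported, i.e. close to a 4x scale-up.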
00:07:33.777 [2024-07-23 00:47:17.734819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292769 ] 00:07:33.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.777 [2024-07-23 00:47:17.798806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.777 [2024-07-23 00:47:17.892647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.777 [2024-07-23 00:47:17.892704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.777 [2024-07-23 00:47:17.892758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.777 [2024-07-23 00:47:17.892761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.148 00:47:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.148 00:07:35.149 SPDK Configuration: 00:07:35.149 Core mask: 0xf 00:07:35.149 00:07:35.149 Accel Perf Configuration: 00:07:35.149 Workload Type: decompress 00:07:35.149 Transfer size: 111250 bytes 00:07:35.149 Vector count 1 00:07:35.149 Module: software 00:07:35.149 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.149 Queue depth: 32 00:07:35.149 Allocate depth: 32 00:07:35.149 # threads/core: 1 00:07:35.149 Run time: 1 seconds 00:07:35.149 Verify: Yes 00:07:35.149 00:07:35.149 Running for 1 seconds... 00:07:35.149 00:07:35.149 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.149 ------------------------------------------------------------------------------------ 00:07:35.149 0,0 3776/s 155 MiB/s 0 0 00:07:35.149 3,0 3776/s 155 MiB/s 0 0 00:07:35.149 2,0 3776/s 155 MiB/s 0 0 00:07:35.149 1,0 3808/s 157 MiB/s 0 0 00:07:35.149 ==================================================================================== 00:07:35.149 Total 15136/s 1605 MiB/s 0 0' 00:07:35.149 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.149 00:47:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.149 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.149 00:47:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.149 00:47:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.149 00:47:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.149 00:47:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.149 00:47:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.149 00:47:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.149 00:47:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.149 00:47:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.149 00:47:19 -- accel/accel.sh@42 -- # jq -r . 00:07:35.149 [2024-07-23 00:47:19.164832] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
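accel_decomp_full_mcore combines the two previous variants (-o 0 and -m 0xf): whole 111250-byte chunks decompressed on four cores at once. Sketch, same assumptions:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf

Again the Total row is consistent with transfers/s times transfer size: 15136/s x 111250 bytes is roughly 1605 MiB/s, about four times the single-core full-buffer figure.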
00:07:35.149 [2024-07-23 00:47:19.164914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292915 ] 00:07:35.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.149 [2024-07-23 00:47:19.227874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.149 [2024-07-23 00:47:19.324961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.149 [2024-07-23 00:47:19.325012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.149 [2024-07-23 00:47:19.325068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.149 [2024-07-23 00:47:19.325071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=0xf 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=decompress 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=software 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=32 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=32 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=1 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val=Yes 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:35.407 00:47:19 -- accel/accel.sh@21 -- # val= 00:07:35.407 00:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # IFS=: 00:07:35.407 00:47:19 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 
00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@21 -- # val= 00:07:36.781 00:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:36.781 00:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:36.781 00:47:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.781 00:47:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.781 00:47:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.781 00:07:36.781 real 0m2.871s 00:07:36.781 user 0m9.553s 00:07:36.781 sys 0m0.314s 00:07:36.781 00:47:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.781 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:07:36.781 ************************************ 00:07:36.781 END TEST accel_decomp_full_mcore 00:07:36.781 ************************************ 00:07:36.781 00:47:20 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.781 00:47:20 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:36.781 00:47:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.781 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:07:36.781 ************************************ 00:07:36.781 START TEST accel_decomp_mthread 00:07:36.781 ************************************ 00:07:36.781 00:47:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.781 00:47:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.781 00:47:20 -- accel/accel.sh@17 -- # local accel_module 00:07:36.781 00:47:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.781 00:47:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.781 00:47:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.781 00:47:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.781 00:47:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.781 00:47:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.781 00:47:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.781 00:47:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.781 00:47:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.781 00:47:20 -- accel/accel.sh@42 -- # jq -r . 00:07:36.781 [2024-07-23 00:47:20.631989] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:36.781 [2024-07-23 00:47:20.632077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293114 ] 00:07:36.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.781 [2024-07-23 00:47:20.695071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.781 [2024-07-23 00:47:20.789302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.186 00:47:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.186 00:07:38.186 SPDK Configuration: 00:07:38.186 Core mask: 0x1 00:07:38.186 00:07:38.186 Accel Perf Configuration: 00:07:38.186 Workload Type: decompress 00:07:38.186 Transfer size: 4096 bytes 00:07:38.186 Vector count 1 00:07:38.186 Module: software 00:07:38.186 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.186 Queue depth: 32 00:07:38.186 Allocate depth: 32 00:07:38.186 # threads/core: 2 00:07:38.186 Run time: 1 seconds 00:07:38.186 Verify: Yes 00:07:38.186 00:07:38.186 Running for 1 seconds... 00:07:38.186 00:07:38.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.186 ------------------------------------------------------------------------------------ 00:07:38.186 0,1 28096/s 51 MiB/s 0 0 00:07:38.186 0,0 28000/s 51 MiB/s 0 0 00:07:38.186 ==================================================================================== 00:07:38.186 Total 56096/s 219 MiB/s 0 0' 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.186 00:47:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.186 00:47:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.186 00:47:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.186 00:47:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.186 00:47:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.186 00:47:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.186 00:47:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.186 00:47:22 -- accel/accel.sh@42 -- # jq -r . 00:07:38.186 [2024-07-23 00:47:22.052486] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
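accel_decomp_mthread goes back to a single core but passes -T 2; the configuration block above reports "# threads/core: 2" and the result table shows two rows, 0,0 and 0,1, for the two worker threads on core 0. Sketch, same assumptions:

  # two worker threads on one core
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2

The combined Total, 56096/s x 4096 bytes, is about 219 MiB/s, essentially the same as the single-thread decompress run, consistent with two threads sharing a single CPU-bound core for the software codec.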
00:07:38.186 [2024-07-23 00:47:22.052568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293338 ] 00:07:38.186 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.186 [2024-07-23 00:47:22.118821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.186 [2024-07-23 00:47:22.210911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val=0x1 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val=decompress 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.186 00:47:22 -- accel/accel.sh@21 -- # val=software 00:07:38.186 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.186 00:47:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.186 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val=32 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 
-- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val=32 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val=2 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val=Yes 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:38.187 00:47:22 -- accel/accel.sh@21 -- # val= 00:07:38.187 00:47:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:38.187 00:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@21 -- # val= 00:07:39.558 00:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:39.558 00:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:39.558 00:47:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.558 00:47:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.558 00:47:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.558 00:07:39.558 real 0m2.835s 00:07:39.558 user 0m2.534s 00:07:39.558 sys 0m0.293s 00:07:39.558 00:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.558 00:47:23 -- common/autotest_common.sh@10 -- # set +x 
00:07:39.558 ************************************ 00:07:39.558 END TEST accel_decomp_mthread 00:07:39.558 ************************************ 00:07:39.558 00:47:23 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.558 00:47:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:39.558 00:47:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.558 00:47:23 -- common/autotest_common.sh@10 -- # set +x 00:07:39.558 ************************************ 00:07:39.558 START TEST accel_deomp_full_mthread 00:07:39.558 ************************************ 00:07:39.558 00:47:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.558 00:47:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.558 00:47:23 -- accel/accel.sh@17 -- # local accel_module 00:07:39.558 00:47:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.558 00:47:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.558 00:47:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.558 00:47:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.558 00:47:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.558 00:47:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.558 00:47:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.558 00:47:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.558 00:47:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.559 00:47:23 -- accel/accel.sh@42 -- # jq -r . 00:07:39.559 [2024-07-23 00:47:23.494802] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:39.559 [2024-07-23 00:47:23.494880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293502 ] 00:07:39.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.559 [2024-07-23 00:47:23.571777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.559 [2024-07-23 00:47:23.675624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.934 00:47:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.934 00:07:40.934 SPDK Configuration: 00:07:40.934 Core mask: 0x1 00:07:40.934 00:07:40.934 Accel Perf Configuration: 00:07:40.934 Workload Type: decompress 00:07:40.934 Transfer size: 111250 bytes 00:07:40.934 Vector count 1 00:07:40.934 Module: software 00:07:40.934 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.934 Queue depth: 32 00:07:40.934 Allocate depth: 32 00:07:40.934 # threads/core: 2 00:07:40.934 Run time: 1 seconds 00:07:40.934 Verify: Yes 00:07:40.934 00:07:40.934 Running for 1 seconds... 
00:07:40.934 00:07:40.934 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.934 ------------------------------------------------------------------------------------ 00:07:40.934 0,1 1952/s 80 MiB/s 0 0 00:07:40.934 0,0 1920/s 79 MiB/s 0 0 00:07:40.934 ==================================================================================== 00:07:40.934 Total 3872/s 410 MiB/s 0 0' 00:07:40.934 00:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:40.934 00:47:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.934 00:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:40.934 00:47:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.934 00:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.934 00:47:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.934 00:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.934 00:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.934 00:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.934 00:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.934 00:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.934 00:47:24 -- accel/accel.sh@42 -- # jq -r . 00:07:40.934 [2024-07-23 00:47:24.962106] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:40.934 [2024-07-23 00:47:24.962188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293647 ] 00:07:40.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.934 [2024-07-23 00:47:25.026429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.934 [2024-07-23 00:47:25.120148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=0x1 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=decompress 00:07:41.193 
00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=software 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=32 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=32 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=2 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val=Yes 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.193 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:41.194 00:47:25 -- accel/accel.sh@21 -- # val= 00:07:41.194 00:47:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.194 00:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:41.194 00:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@21 -- # val= 00:07:42.568 00:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:42.568 00:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:42.568 00:47:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.568 00:47:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.568 00:47:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.568 00:07:42.568 real 0m2.913s 00:07:42.568 user 0m2.606s 00:07:42.568 sys 0m0.298s 00:07:42.568 00:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.568 00:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.568 ************************************ 00:07:42.568 END TEST accel_deomp_full_mthread 00:07:42.568 ************************************ 00:07:42.568 00:47:26 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:42.568 00:47:26 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.568 00:47:26 -- accel/accel.sh@129 -- # build_accel_config 00:07:42.568 00:47:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.568 00:47:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:42.568 00:47:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.568 00:47:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.568 00:47:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.568 00:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.568 00:47:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.568 00:47:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.568 00:47:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.568 00:47:26 -- accel/accel.sh@42 -- # jq -r . 00:07:42.568 ************************************ 00:07:42.568 START TEST accel_dif_functional_tests 00:07:42.568 ************************************ 00:07:42.568 00:47:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.569 [2024-07-23 00:47:26.457076] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
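Each of these example binaries (accel_perf above, dif here) is handed its accel configuration as -c /dev/fd/62: the harness builds a JSON config in memory and exposes it to the child through an anonymous descriptor. The exact plumbing inside accel.sh is not visible in this log; the following is only one way to get the same effect with process substitution, and the JSON layout shown is an assumption:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
accel_json_cfg=()    # per-module RPCs would be appended here; it stays empty for the software-only runs above
cfg="{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [$(IFS=,; echo "${accel_json_cfg[*]}")]}]}"
"$SPDK_DIR/test/accel/dif/dif" -c <(echo "$cfg")    # the substituted pipe shows up in the child as /dev/fd/NN
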
00:07:42.569 [2024-07-23 00:47:26.457162] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293922 ] 00:07:42.569 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.569 [2024-07-23 00:47:26.523145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.569 [2024-07-23 00:47:26.619028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.569 [2024-07-23 00:47:26.619071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.569 [2024-07-23 00:47:26.619074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.569 00:07:42.569 00:07:42.569 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.569 http://cunit.sourceforge.net/ 00:07:42.569 00:07:42.569 00:07:42.569 Suite: accel_dif 00:07:42.569 Test: verify: DIF generated, GUARD check ...passed 00:07:42.569 Test: verify: DIF generated, APPTAG check ...passed 00:07:42.569 Test: verify: DIF generated, REFTAG check ...passed 00:07:42.569 Test: verify: DIF not generated, GUARD check ...[2024-07-23 00:47:26.716936] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.569 [2024-07-23 00:47:26.717002] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.569 passed 00:07:42.569 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 00:47:26.717046] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.569 [2024-07-23 00:47:26.717084] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.569 passed 00:07:42.569 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 00:47:26.717117] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.569 [2024-07-23 00:47:26.717147] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.569 passed 00:07:42.569 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:42.569 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 00:47:26.717214] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:42.569 passed 00:07:42.569 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:42.569 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:42.569 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:42.569 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 00:47:26.717370] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:42.569 passed 00:07:42.569 Test: generate copy: DIF generated, GUARD check ...passed 00:07:42.569 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:42.569 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:42.569 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:42.569 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:42.569 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:42.569 Test: generate copy: iovecs-len validate ...[2024-07-23 00:47:26.717631] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:42.569 passed 00:07:42.569 Test: generate copy: buffer alignment validate ...passed 00:07:42.569 00:07:42.569 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.569 suites 1 1 n/a 0 0 00:07:42.569 tests 20 20 20 0 0 00:07:42.569 asserts 204 204 204 0 n/a 00:07:42.569 00:07:42.569 Elapsed time = 0.003 seconds 00:07:42.827 00:07:42.827 real 0m0.513s 00:07:42.827 user 0m0.799s 00:07:42.827 sys 0m0.178s 00:07:42.827 00:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.827 00:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.827 ************************************ 00:07:42.827 END TEST accel_dif_functional_tests 00:07:42.827 ************************************ 00:07:42.827 00:07:42.827 real 0m59.924s 00:07:42.827 user 1m7.647s 00:07:42.827 sys 0m7.312s 00:07:42.827 00:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.827 00:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.827 ************************************ 00:07:42.827 END TEST accel 00:07:42.827 ************************************ 00:07:42.827 00:47:26 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:42.827 00:47:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.827 00:47:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.827 00:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.827 ************************************ 00:07:42.827 START TEST accel_rpc 00:07:42.827 ************************************ 00:07:42.827 00:47:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:42.827 * Looking for test storage... 00:07:42.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:42.827 00:47:27 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:42.827 00:47:27 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3293991 00:07:42.827 00:47:27 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:43.086 00:47:27 -- accel/accel_rpc.sh@15 -- # waitforlisten 3293991 00:07:43.086 00:47:27 -- common/autotest_common.sh@819 -- # '[' -z 3293991 ']' 00:07:43.086 00:47:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.086 00:47:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.086 00:47:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.086 00:47:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.086 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 [2024-07-23 00:47:27.077446] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
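The accel_rpc test starting here drives the opcode-assignment RPCs by hand. A condensed sketch of that flow using the standalone rpc.py client instead of the harness's rpc_cmd wrapper (the harness also waits for /var/tmp/spdk.sock to appear before the first call):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &                       # start the target with subsystem init deferred
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software       # pin the copy opcode to the software module
"$SPDK_DIR/scripts/rpc.py" framework_start_init                       # now let the accel framework initialize
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy    # expected to print: software
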
00:07:43.086 [2024-07-23 00:47:27.077575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293991 ] 00:07:43.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.086 [2024-07-23 00:47:27.144012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.086 [2024-07-23 00:47:27.235977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.086 [2024-07-23 00:47:27.236174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.345 00:47:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.345 00:47:27 -- common/autotest_common.sh@852 -- # return 0 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:43.345 00:47:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.345 00:47:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.345 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 ************************************ 00:07:43.345 START TEST accel_assign_opcode 00:07:43.345 ************************************ 00:07:43.345 00:47:27 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:43.345 00:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.345 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 [2024-07-23 00:47:27.320837] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:43.345 00:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:43.345 00:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.345 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 [2024-07-23 00:47:27.328831] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:43.345 00:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.345 00:47:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:43.345 00:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.345 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.603 00:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.603 00:47:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:43.603 00:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.603 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.603 00:47:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:43.603 00:47:27 -- accel/accel_rpc.sh@42 -- # grep software 00:07:43.603 00:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.603 software 00:07:43.603 00:07:43.603 real 0m0.294s 00:07:43.603 user 0m0.040s 00:07:43.603 sys 0m0.007s 00:07:43.603 00:47:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.603 00:47:27 -- common/autotest_common.sh@10 -- # set +x 
00:07:43.603 ************************************ 00:07:43.603 END TEST accel_assign_opcode 00:07:43.603 ************************************ 00:07:43.603 00:47:27 -- accel/accel_rpc.sh@55 -- # killprocess 3293991 00:07:43.603 00:47:27 -- common/autotest_common.sh@926 -- # '[' -z 3293991 ']' 00:07:43.603 00:47:27 -- common/autotest_common.sh@930 -- # kill -0 3293991 00:07:43.603 00:47:27 -- common/autotest_common.sh@931 -- # uname 00:07:43.603 00:47:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:43.603 00:47:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3293991 00:07:43.603 00:47:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:43.603 00:47:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:43.603 00:47:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3293991' 00:07:43.603 killing process with pid 3293991 00:07:43.603 00:47:27 -- common/autotest_common.sh@945 -- # kill 3293991 00:07:43.603 00:47:27 -- common/autotest_common.sh@950 -- # wait 3293991 00:07:44.170 00:07:44.170 real 0m1.092s 00:07:44.170 user 0m1.036s 00:07:44.170 sys 0m0.418s 00:07:44.170 00:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.170 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:07:44.170 ************************************ 00:07:44.170 END TEST accel_rpc 00:07:44.170 ************************************ 00:07:44.170 00:47:28 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.170 00:47:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.170 00:47:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.170 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:07:44.170 ************************************ 00:07:44.170 START TEST app_cmdline 00:07:44.170 ************************************ 00:07:44.170 00:47:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.170 * Looking for test storage... 00:07:44.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:44.170 00:47:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.170 00:47:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3294194 00:07:44.170 00:47:28 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.170 00:47:28 -- app/cmdline.sh@18 -- # waitforlisten 3294194 00:07:44.170 00:47:28 -- common/autotest_common.sh@819 -- # '[' -z 3294194 ']' 00:07:44.170 00:47:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.170 00:47:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:44.170 00:47:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.170 00:47:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:44.170 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:07:44.170 [2024-07-23 00:47:28.199903] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
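For the cmdline test, spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; anything else should come back as JSON-RPC error -32601, which is what the env_dpdk_get_mem_stats probe further down demonstrates. Sketched with rpc.py directly:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
"$SPDK_DIR/scripts/rpc.py" spdk_get_version          # allowed: returns the version JSON seen below
"$SPDK_DIR/scripts/rpc.py" rpc_get_methods           # allowed: lists exactly the two permitted methods
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats    # not allowed: fails with "Method not found" (-32601)
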
00:07:44.170 [2024-07-23 00:47:28.199995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294194 ] 00:07:44.170 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.170 [2024-07-23 00:47:28.269840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.170 [2024-07-23 00:47:28.360008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.170 [2024-07-23 00:47:28.360189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.102 00:47:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.102 00:47:29 -- common/autotest_common.sh@852 -- # return 0 00:07:45.102 00:47:29 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:45.359 { 00:07:45.359 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:45.359 "fields": { 00:07:45.359 "major": 24, 00:07:45.359 "minor": 1, 00:07:45.359 "patch": 1, 00:07:45.359 "suffix": "-pre", 00:07:45.359 "commit": "dbef7efac" 00:07:45.359 } 00:07:45.359 } 00:07:45.359 00:47:29 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:45.359 00:47:29 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:45.359 00:47:29 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:45.359 00:47:29 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:45.359 00:47:29 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:45.359 00:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:45.359 00:47:29 -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 00:47:29 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:45.359 00:47:29 -- app/cmdline.sh@26 -- # sort 00:07:45.359 00:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:45.359 00:47:29 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:45.359 00:47:29 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:45.359 00:47:29 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.359 00:47:29 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.359 00:47:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.359 00:47:29 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.359 00:47:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.359 00:47:29 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.359 00:47:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.359 00:47:29 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.359 00:47:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.359 00:47:29 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.359 00:47:29 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.359 00:47:29 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.616 request: 00:07:45.616 { 00:07:45.616 "method": "env_dpdk_get_mem_stats", 00:07:45.616 "req_id": 1 00:07:45.616 } 00:07:45.616 Got JSON-RPC error response 00:07:45.616 response: 00:07:45.616 { 00:07:45.616 "code": -32601, 00:07:45.616 "message": "Method not found" 00:07:45.616 } 00:07:45.616 00:47:29 -- common/autotest_common.sh@643 -- # es=1 00:07:45.616 00:47:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.616 00:47:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.616 00:47:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.616 00:47:29 -- app/cmdline.sh@1 -- # killprocess 3294194 00:07:45.616 00:47:29 -- common/autotest_common.sh@926 -- # '[' -z 3294194 ']' 00:07:45.616 00:47:29 -- common/autotest_common.sh@930 -- # kill -0 3294194 00:07:45.616 00:47:29 -- common/autotest_common.sh@931 -- # uname 00:07:45.616 00:47:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:45.616 00:47:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3294194 00:07:45.616 00:47:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:45.616 00:47:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:45.616 00:47:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3294194' 00:07:45.616 killing process with pid 3294194 00:07:45.616 00:47:29 -- common/autotest_common.sh@945 -- # kill 3294194 00:07:45.616 00:47:29 -- common/autotest_common.sh@950 -- # wait 3294194 00:07:46.183 00:07:46.183 real 0m2.084s 00:07:46.183 user 0m2.633s 00:07:46.183 sys 0m0.518s 00:07:46.183 00:47:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.183 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.183 ************************************ 00:07:46.183 END TEST app_cmdline 00:07:46.183 ************************************ 00:07:46.183 00:47:30 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:46.183 00:47:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.183 00:47:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.183 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.183 ************************************ 00:07:46.183 START TEST version 00:07:46.183 ************************************ 00:07:46.183 00:47:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:46.183 * Looking for test storage... 
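The version test that follows assembles the SPDK version string from include/spdk/version.h, pulling each component out with the same grep/cut/tr pipeline shown in the trace. A minimal standalone equivalent:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
hdr="$SPDK_DIR/include/spdk/version.h"
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
echo "${major}.${minor}.${patch}${suffix}"    # 24.1.1-pre for this build
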
00:07:46.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.183 00:47:30 -- app/version.sh@17 -- # get_header_version major 00:07:46.183 00:47:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.183 00:47:30 -- app/version.sh@14 -- # cut -f2 00:07:46.183 00:47:30 -- app/version.sh@14 -- # tr -d '"' 00:07:46.183 00:47:30 -- app/version.sh@17 -- # major=24 00:07:46.183 00:47:30 -- app/version.sh@18 -- # get_header_version minor 00:07:46.183 00:47:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.183 00:47:30 -- app/version.sh@14 -- # cut -f2 00:07:46.183 00:47:30 -- app/version.sh@14 -- # tr -d '"' 00:07:46.183 00:47:30 -- app/version.sh@18 -- # minor=1 00:07:46.183 00:47:30 -- app/version.sh@19 -- # get_header_version patch 00:07:46.183 00:47:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.183 00:47:30 -- app/version.sh@14 -- # cut -f2 00:07:46.183 00:47:30 -- app/version.sh@14 -- # tr -d '"' 00:07:46.183 00:47:30 -- app/version.sh@19 -- # patch=1 00:07:46.183 00:47:30 -- app/version.sh@20 -- # get_header_version suffix 00:07:46.183 00:47:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.183 00:47:30 -- app/version.sh@14 -- # cut -f2 00:07:46.183 00:47:30 -- app/version.sh@14 -- # tr -d '"' 00:07:46.183 00:47:30 -- app/version.sh@20 -- # suffix=-pre 00:07:46.183 00:47:30 -- app/version.sh@22 -- # version=24.1 00:07:46.183 00:47:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.183 00:47:30 -- app/version.sh@25 -- # version=24.1.1 00:07:46.183 00:47:30 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:46.183 00:47:30 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.183 00:47:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.183 00:47:30 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:46.183 00:47:30 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:46.183 00:07:46.183 real 0m0.101s 00:07:46.183 user 0m0.055s 00:07:46.183 sys 0m0.068s 00:07:46.183 00:47:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.183 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.183 ************************************ 00:07:46.183 END TEST version 00:07:46.183 ************************************ 00:07:46.183 00:47:30 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@204 -- # uname -s 00:07:46.183 00:47:30 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:46.183 00:47:30 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:46.183 00:47:30 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:46.183 00:47:30 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:46.183 00:47:30 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:46.183 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.183 00:47:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:46.183 00:47:30 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:46.183 00:47:30 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.183 00:47:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:46.183 00:47:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.183 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.183 ************************************ 00:07:46.183 START TEST nvmf_tcp 00:07:46.183 ************************************ 00:07:46.183 00:47:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.442 * Looking for test storage... 00:07:46.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.442 00:47:30 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.442 00:47:30 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:46.442 00:47:30 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.442 00:47:30 -- nvmf/common.sh@7 -- # uname -s 00:07:46.442 00:47:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.442 00:47:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.442 00:47:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.442 00:47:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.442 00:47:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.442 00:47:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.442 00:47:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.442 00:47:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.442 00:47:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.442 00:47:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.442 00:47:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.442 00:47:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.442 00:47:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.442 00:47:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.442 00:47:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.442 00:47:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.442 00:47:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.442 00:47:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.442 00:47:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.442 00:47:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.442 00:47:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- paths/export.sh@5 -- # export PATH 00:07:46.443 00:47:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- nvmf/common.sh@46 -- # : 0 00:07:46.443 00:47:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:46.443 00:47:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:46.443 00:47:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.443 00:47:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.443 00:47:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:46.443 00:47:30 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.443 00:47:30 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:46.443 00:47:30 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:46.443 00:47:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:46.443 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.443 00:47:30 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:46.443 00:47:30 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:46.443 00:47:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:46.443 00:47:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.443 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.443 ************************************ 00:07:46.443 START TEST nvmf_example 00:07:46.443 ************************************ 00:07:46.443 00:47:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:46.443 * Looking for test storage... 
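nvmf/common.sh, sourced by nvmf.sh above and re-sourced by nvmf_example.sh below, fixes the listener ports and derives the initiator identity from nvme-cli. A condensed sketch; how the bare host ID is cut out of the generated NQN is an assumption here, only the resulting values are visible in the log:

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # the bare UUID, logged as NVME_HOSTID above
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
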
00:07:46.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.443 00:47:30 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.443 00:47:30 -- nvmf/common.sh@7 -- # uname -s 00:07:46.443 00:47:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.443 00:47:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.443 00:47:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.443 00:47:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.443 00:47:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.443 00:47:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.443 00:47:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.443 00:47:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.443 00:47:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.443 00:47:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.443 00:47:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.443 00:47:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.443 00:47:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.443 00:47:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.443 00:47:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.443 00:47:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.443 00:47:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.443 00:47:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.443 00:47:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.443 00:47:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- paths/export.sh@5 -- # export PATH 00:07:46.443 00:47:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.443 00:47:30 -- nvmf/common.sh@46 -- # : 0 00:07:46.443 00:47:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:46.443 00:47:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:46.443 00:47:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.443 00:47:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.443 00:47:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:46.443 00:47:30 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:46.443 00:47:30 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:46.443 00:47:30 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:46.443 00:47:30 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:46.443 00:47:30 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:46.443 00:47:30 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:46.443 00:47:30 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:46.443 00:47:30 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:46.443 00:47:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:46.443 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.443 00:47:30 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:46.443 00:47:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:46.443 00:47:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.443 00:47:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:46.443 00:47:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:46.443 00:47:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:46.443 00:47:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.443 00:47:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.443 00:47:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.443 00:47:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:46.443 00:47:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:46.443 00:47:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:46.443 00:47:30 -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.346 00:47:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:48.346 00:47:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:48.346 00:47:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:48.346 00:47:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:48.346 00:47:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:48.346 00:47:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:48.346 00:47:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:48.346 00:47:32 -- nvmf/common.sh@294 -- # net_devs=() 00:07:48.346 00:47:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:48.346 00:47:32 -- nvmf/common.sh@295 -- # e810=() 00:07:48.346 00:47:32 -- nvmf/common.sh@295 -- # local -ga e810 00:07:48.346 00:47:32 -- nvmf/common.sh@296 -- # x722=() 00:07:48.346 00:47:32 -- nvmf/common.sh@296 -- # local -ga x722 00:07:48.346 00:47:32 -- nvmf/common.sh@297 -- # mlx=() 00:07:48.346 00:47:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:48.346 00:47:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.346 00:47:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:48.346 00:47:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:48.346 00:47:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:48.346 00:47:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.346 00:47:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:48.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:48.346 00:47:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.346 00:47:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:48.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:48.346 00:47:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
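The trace above shows gather_supported_nvmf_pci_devs matching the Intel E810 device IDs (0x1592, 0x159b) and X722 (0x37d2) against the PCI bus cache and reporting both ports of the ice NIC at 0000:0a:00.0 and 0000:0a:00.1. A rough manual equivalent of that scan is sketched below; it is only an illustration, it assumes lspci from pciutils is installed, and the device IDs and the example PCI address are simply the ones printed in the trace.
# Hypothetical manual check mirroring gather_supported_nvmf_pci_devs:
for id in 8086:1592 8086:159b 8086:37d2; do
  lspci -Dnn -d "$id"          # list any PCI functions with this vendor:device ID
done
# Net interfaces backing one matched function (same sysfs path the script reads):
ls /sys/bus/pci/devices/0000:0a:00.0/net/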
00:07:48.346 00:47:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:48.346 00:47:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:48.346 00:47:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.346 00:47:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.346 00:47:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.346 00:47:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.346 00:47:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:48.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:48.346 00:47:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.346 00:47:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.346 00:47:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.346 00:47:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.346 00:47:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.346 00:47:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:48.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:48.346 00:47:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.346 00:47:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:48.346 00:47:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:48.346 00:47:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:48.347 00:47:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:48.347 00:47:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:48.347 00:47:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.347 00:47:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.347 00:47:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.347 00:47:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:48.347 00:47:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.347 00:47:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.347 00:47:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:48.347 00:47:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.347 00:47:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.347 00:47:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:48.347 00:47:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:48.347 00:47:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.347 00:47:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.347 00:47:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.347 00:47:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.347 00:47:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:48.347 00:47:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.605 00:47:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.605 00:47:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.605 00:47:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:48.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:07:48.605 00:07:48.605 --- 10.0.0.2 ping statistics --- 00:07:48.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.605 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:48.605 00:47:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:07:48.605 00:07:48.605 --- 10.0.0.1 ping statistics --- 00:07:48.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.605 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:07:48.605 00:47:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.605 00:47:32 -- nvmf/common.sh@410 -- # return 0 00:07:48.605 00:47:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:48.605 00:47:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.605 00:47:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:48.605 00:47:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:48.605 00:47:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.605 00:47:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:48.605 00:47:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:48.605 00:47:32 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:48.605 00:47:32 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:48.605 00:47:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.605 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:07:48.605 00:47:32 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:48.605 00:47:32 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:48.605 00:47:32 -- target/nvmf_example.sh@34 -- # nvmfpid=3296229 00:07:48.605 00:47:32 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:48.605 00:47:32 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.605 00:47:32 -- target/nvmf_example.sh@36 -- # waitforlisten 3296229 00:07:48.605 00:47:32 -- common/autotest_common.sh@819 -- # '[' -z 3296229 ']' 00:07:48.605 00:47:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.605 00:47:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:48.605 00:47:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
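For readability, the nvmf_tcp_init plumbing traced above reduces to the commands below: one port of the NIC (cvl_0_0) is moved into a fresh network namespace to act as the target, the other port (cvl_0_1) stays in the root namespace as the initiator, and the example nvmf target is then started inside that namespace. The commands are copied from the trace with the xtrace prefixes stripped; only the comments and the trailing ampersand note are added.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
# the test records the pid of the example app and waits for /var/tmp/spdk.sock before issuing RPCs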
00:07:48.605 00:47:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:48.605 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:07:48.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.539 00:47:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:49.539 00:47:33 -- common/autotest_common.sh@852 -- # return 0 00:07:49.539 00:47:33 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:49.539 00:47:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.539 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.539 00:47:33 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.539 00:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.539 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.539 00:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.539 00:47:33 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:49.539 00:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.539 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.539 00:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.539 00:47:33 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:49.539 00:47:33 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:49.539 00:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.539 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.539 00:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.539 00:47:33 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:49.539 00:47:33 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:49.539 00:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.539 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.539 00:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.539 00:47:33 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.539 00:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.539 00:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.539 00:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.539 00:47:33 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:49.539 00:47:33 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:49.539 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.739 Initializing NVMe Controllers 00:08:01.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:01.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:01.739 Initialization complete. Launching workers. 
00:08:01.739 ======================================================== 00:08:01.739 Latency(us) 00:08:01.739 Device Information : IOPS MiB/s Average min max 00:08:01.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15453.83 60.37 4140.89 882.38 19175.40 00:08:01.739 ======================================================== 00:08:01.739 Total : 15453.83 60.37 4140.89 882.38 19175.40 00:08:01.739 00:08:01.739 00:47:43 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:01.739 00:47:43 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:01.739 00:47:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:01.739 00:47:43 -- nvmf/common.sh@116 -- # sync 00:08:01.739 00:47:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:01.739 00:47:43 -- nvmf/common.sh@119 -- # set +e 00:08:01.739 00:47:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:01.739 00:47:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:01.739 rmmod nvme_tcp 00:08:01.739 rmmod nvme_fabrics 00:08:01.739 rmmod nvme_keyring 00:08:01.739 00:47:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:01.739 00:47:43 -- nvmf/common.sh@123 -- # set -e 00:08:01.739 00:47:43 -- nvmf/common.sh@124 -- # return 0 00:08:01.739 00:47:43 -- nvmf/common.sh@477 -- # '[' -n 3296229 ']' 00:08:01.739 00:47:43 -- nvmf/common.sh@478 -- # killprocess 3296229 00:08:01.739 00:47:43 -- common/autotest_common.sh@926 -- # '[' -z 3296229 ']' 00:08:01.739 00:47:43 -- common/autotest_common.sh@930 -- # kill -0 3296229 00:08:01.739 00:47:43 -- common/autotest_common.sh@931 -- # uname 00:08:01.739 00:47:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:01.739 00:47:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3296229 00:08:01.739 00:47:43 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:01.739 00:47:43 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:01.739 00:47:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3296229' 00:08:01.739 killing process with pid 3296229 00:08:01.739 00:47:43 -- common/autotest_common.sh@945 -- # kill 3296229 00:08:01.739 00:47:43 -- common/autotest_common.sh@950 -- # wait 3296229 00:08:01.739 nvmf threads initialize successfully 00:08:01.739 bdev subsystem init successfully 00:08:01.739 created a nvmf target service 00:08:01.739 create targets's poll groups done 00:08:01.739 all subsystems of target started 00:08:01.739 nvmf target is running 00:08:01.739 all subsystems of target stopped 00:08:01.739 destroy targets's poll groups done 00:08:01.739 destroyed the nvmf target service 00:08:01.739 bdev subsystem finish successfully 00:08:01.739 nvmf threads destroy successfully 00:08:01.739 00:47:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:01.739 00:47:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:01.739 00:47:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:01.739 00:47:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.739 00:47:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:01.739 00:47:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.739 00:47:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.739 00:47:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.312 00:47:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:02.312 00:47:46 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:02.312 00:47:46 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:02.312 00:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:02.312 00:08:02.312 real 0m15.819s 00:08:02.312 user 0m44.971s 00:08:02.312 sys 0m3.203s 00:08:02.312 00:47:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.312 00:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:02.312 ************************************ 00:08:02.312 END TEST nvmf_example 00:08:02.312 ************************************ 00:08:02.312 00:47:46 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:02.312 00:47:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:02.312 00:47:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.312 00:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:02.312 ************************************ 00:08:02.312 START TEST nvmf_filesystem 00:08:02.312 ************************************ 00:08:02.312 00:47:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:02.312 * Looking for test storage... 00:08:02.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.312 00:47:46 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:02.312 00:47:46 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:02.312 00:47:46 -- common/autotest_common.sh@34 -- # set -e 00:08:02.312 00:47:46 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:02.312 00:47:46 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:02.312 00:47:46 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:02.312 00:47:46 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:02.312 00:47:46 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:02.312 00:47:46 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:02.312 00:47:46 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:02.312 00:47:46 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:02.312 00:47:46 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:02.312 00:47:46 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:02.312 00:47:46 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:02.312 00:47:46 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:02.312 00:47:46 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:02.312 00:47:46 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:02.312 00:47:46 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:02.312 00:47:46 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:02.312 00:47:46 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:02.312 00:47:46 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:02.312 00:47:46 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:02.312 00:47:46 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:02.312 00:47:46 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:02.312 00:47:46 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:02.312 00:47:46 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:02.312 00:47:46 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:02.312 00:47:46 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:02.312 00:47:46 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:02.312 00:47:46 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:02.312 00:47:46 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:02.312 00:47:46 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:02.312 00:47:46 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:02.312 00:47:46 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:02.312 00:47:46 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:02.312 00:47:46 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:02.312 00:47:46 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:02.312 00:47:46 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:02.312 00:47:46 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:02.312 00:47:46 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:02.312 00:47:46 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:02.312 00:47:46 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:02.312 00:47:46 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:02.312 00:47:46 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:02.312 00:47:46 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:02.312 00:47:46 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:02.312 00:47:46 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:02.312 00:47:46 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:02.312 00:47:46 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:02.312 00:47:46 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:02.312 00:47:46 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:02.312 00:47:46 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:02.312 00:47:46 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:02.313 00:47:46 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:02.313 00:47:46 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:02.313 00:47:46 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:02.313 00:47:46 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:02.313 00:47:46 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:02.313 00:47:46 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:02.313 00:47:46 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:02.313 00:47:46 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:02.313 00:47:46 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:02.313 00:47:46 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:02.313 00:47:46 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:02.313 00:47:46 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:02.313 00:47:46 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:02.313 00:47:46 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:02.313 00:47:46 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.313 00:47:46 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:02.313 00:47:46 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:02.313 00:47:46 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:02.313 00:47:46 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:02.313 00:47:46 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:02.313 00:47:46 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:02.313 00:47:46 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:02.313 00:47:46 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:02.313 00:47:46 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:02.313 00:47:46 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:02.313 00:47:46 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:02.313 00:47:46 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:02.313 00:47:46 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:02.313 00:47:46 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:02.313 00:47:46 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:02.313 00:47:46 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:02.313 00:47:46 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:02.313 00:47:46 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:02.313 00:47:46 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:02.313 00:47:46 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:02.313 00:47:46 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:02.313 00:47:46 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:02.313 00:47:46 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:02.313 00:47:46 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:02.313 00:47:46 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:02.313 00:47:46 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:02.313 00:47:46 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:02.313 00:47:46 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:02.313 00:47:46 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:02.313 00:47:46 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:02.313 00:47:46 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:02.313 00:47:46 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:02.313 00:47:46 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:02.313 00:47:46 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:02.313 #define SPDK_CONFIG_H 00:08:02.313 #define SPDK_CONFIG_APPS 1 00:08:02.313 #define SPDK_CONFIG_ARCH native 00:08:02.313 #undef SPDK_CONFIG_ASAN 00:08:02.313 #undef SPDK_CONFIG_AVAHI 00:08:02.313 #undef SPDK_CONFIG_CET 00:08:02.313 #define SPDK_CONFIG_COVERAGE 1 00:08:02.313 #define SPDK_CONFIG_CROSS_PREFIX 00:08:02.313 #undef SPDK_CONFIG_CRYPTO 00:08:02.313 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:02.313 #undef SPDK_CONFIG_CUSTOMOCF 00:08:02.313 #undef SPDK_CONFIG_DAOS 00:08:02.313 #define SPDK_CONFIG_DAOS_DIR 00:08:02.313 #define SPDK_CONFIG_DEBUG 1 00:08:02.313 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:02.313 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:02.313 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:02.313 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.313 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:02.313 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:02.313 #define SPDK_CONFIG_EXAMPLES 1 00:08:02.313 #undef SPDK_CONFIG_FC 00:08:02.313 #define SPDK_CONFIG_FC_PATH 00:08:02.313 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:02.313 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:02.313 #undef SPDK_CONFIG_FUSE 00:08:02.313 #undef SPDK_CONFIG_FUZZER 00:08:02.313 #define SPDK_CONFIG_FUZZER_LIB 00:08:02.313 #undef SPDK_CONFIG_GOLANG 00:08:02.313 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:02.313 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:02.313 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:02.313 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:02.313 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:02.313 #define SPDK_CONFIG_IDXD 1 00:08:02.313 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:02.313 #undef SPDK_CONFIG_IPSEC_MB 00:08:02.313 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:02.313 #define SPDK_CONFIG_ISAL 1 00:08:02.313 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:02.313 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:02.313 #define SPDK_CONFIG_LIBDIR 00:08:02.313 #undef SPDK_CONFIG_LTO 00:08:02.313 #define SPDK_CONFIG_MAX_LCORES 00:08:02.313 #define SPDK_CONFIG_NVME_CUSE 1 00:08:02.313 #undef SPDK_CONFIG_OCF 00:08:02.313 #define SPDK_CONFIG_OCF_PATH 00:08:02.313 #define SPDK_CONFIG_OPENSSL_PATH 00:08:02.313 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:02.313 #undef SPDK_CONFIG_PGO_USE 00:08:02.313 #define SPDK_CONFIG_PREFIX /usr/local 00:08:02.313 #undef SPDK_CONFIG_RAID5F 00:08:02.313 #undef SPDK_CONFIG_RBD 00:08:02.313 #define SPDK_CONFIG_RDMA 1 00:08:02.313 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:02.313 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:02.313 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:02.313 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:02.313 #define SPDK_CONFIG_SHARED 1 00:08:02.313 #undef SPDK_CONFIG_SMA 00:08:02.313 #define SPDK_CONFIG_TESTS 1 00:08:02.313 #undef SPDK_CONFIG_TSAN 00:08:02.313 #define SPDK_CONFIG_UBLK 1 00:08:02.313 #define SPDK_CONFIG_UBSAN 1 00:08:02.313 #undef SPDK_CONFIG_UNIT_TESTS 00:08:02.313 #undef SPDK_CONFIG_URING 00:08:02.313 #define SPDK_CONFIG_URING_PATH 00:08:02.313 #undef SPDK_CONFIG_URING_ZNS 00:08:02.313 #undef SPDK_CONFIG_USDT 00:08:02.313 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:02.313 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:02.313 #define SPDK_CONFIG_VFIO_USER 1 00:08:02.313 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:02.313 #define SPDK_CONFIG_VHOST 1 00:08:02.313 #define SPDK_CONFIG_VIRTIO 1 00:08:02.313 #undef SPDK_CONFIG_VTUNE 00:08:02.313 #define SPDK_CONFIG_VTUNE_DIR 00:08:02.313 #define SPDK_CONFIG_WERROR 1 00:08:02.313 #define SPDK_CONFIG_WPDK_DIR 00:08:02.313 #undef SPDK_CONFIG_XNVME 00:08:02.313 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:02.313 00:47:46 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:02.313 00:47:46 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.313 00:47:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.313 00:47:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.313 
00:47:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.313 00:47:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.313 00:47:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.313 00:47:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.313 00:47:46 -- paths/export.sh@5 -- # export PATH 00:08:02.313 00:47:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.313 00:47:46 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:02.313 00:47:46 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:02.313 00:47:46 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:02.313 00:47:46 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:02.314 00:47:46 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:02.314 00:47:46 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:02.314 00:47:46 -- pm/common@16 -- # TEST_TAG=N/A 00:08:02.314 00:47:46 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:02.314 00:47:46 -- common/autotest_common.sh@52 -- # : 1 00:08:02.314 00:47:46 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:02.314 00:47:46 -- common/autotest_common.sh@56 -- # : 0 
00:08:02.314 00:47:46 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:02.314 00:47:46 -- common/autotest_common.sh@58 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:02.314 00:47:46 -- common/autotest_common.sh@60 -- # : 1 00:08:02.314 00:47:46 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:02.314 00:47:46 -- common/autotest_common.sh@62 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:02.314 00:47:46 -- common/autotest_common.sh@64 -- # : 00:08:02.314 00:47:46 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:02.314 00:47:46 -- common/autotest_common.sh@66 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:02.314 00:47:46 -- common/autotest_common.sh@68 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:02.314 00:47:46 -- common/autotest_common.sh@70 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:02.314 00:47:46 -- common/autotest_common.sh@72 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:02.314 00:47:46 -- common/autotest_common.sh@74 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:02.314 00:47:46 -- common/autotest_common.sh@76 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:02.314 00:47:46 -- common/autotest_common.sh@78 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:02.314 00:47:46 -- common/autotest_common.sh@80 -- # : 1 00:08:02.314 00:47:46 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:02.314 00:47:46 -- common/autotest_common.sh@82 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:02.314 00:47:46 -- common/autotest_common.sh@84 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:02.314 00:47:46 -- common/autotest_common.sh@86 -- # : 1 00:08:02.314 00:47:46 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:02.314 00:47:46 -- common/autotest_common.sh@88 -- # : 1 00:08:02.314 00:47:46 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:02.314 00:47:46 -- common/autotest_common.sh@90 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:02.314 00:47:46 -- common/autotest_common.sh@92 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:02.314 00:47:46 -- common/autotest_common.sh@94 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:02.314 00:47:46 -- common/autotest_common.sh@96 -- # : tcp 00:08:02.314 00:47:46 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:02.314 00:47:46 -- common/autotest_common.sh@98 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:02.314 00:47:46 -- common/autotest_common.sh@100 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:02.314 00:47:46 -- common/autotest_common.sh@102 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:02.314 00:47:46 -- 
common/autotest_common.sh@104 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:02.314 00:47:46 -- common/autotest_common.sh@106 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:02.314 00:47:46 -- common/autotest_common.sh@108 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:02.314 00:47:46 -- common/autotest_common.sh@110 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:02.314 00:47:46 -- common/autotest_common.sh@112 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:02.314 00:47:46 -- common/autotest_common.sh@114 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:02.314 00:47:46 -- common/autotest_common.sh@116 -- # : 1 00:08:02.314 00:47:46 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:02.314 00:47:46 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:02.314 00:47:46 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:02.314 00:47:46 -- common/autotest_common.sh@120 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:02.314 00:47:46 -- common/autotest_common.sh@122 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:02.314 00:47:46 -- common/autotest_common.sh@124 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:02.314 00:47:46 -- common/autotest_common.sh@126 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:02.314 00:47:46 -- common/autotest_common.sh@128 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:02.314 00:47:46 -- common/autotest_common.sh@130 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:02.314 00:47:46 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:02.314 00:47:46 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:02.314 00:47:46 -- common/autotest_common.sh@134 -- # : true 00:08:02.314 00:47:46 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:02.314 00:47:46 -- common/autotest_common.sh@136 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:02.314 00:47:46 -- common/autotest_common.sh@138 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:02.314 00:47:46 -- common/autotest_common.sh@140 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:02.314 00:47:46 -- common/autotest_common.sh@142 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:02.314 00:47:46 -- common/autotest_common.sh@144 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:02.314 00:47:46 -- common/autotest_common.sh@146 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:02.314 00:47:46 -- common/autotest_common.sh@148 -- # : e810 00:08:02.314 00:47:46 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:02.314 00:47:46 -- common/autotest_common.sh@150 -- # : 0 00:08:02.314 00:47:46 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:02.314 00:47:46 -- common/autotest_common.sh@152 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:02.314 00:47:46 -- common/autotest_common.sh@154 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:02.314 00:47:46 -- common/autotest_common.sh@156 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:02.314 00:47:46 -- common/autotest_common.sh@158 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:02.314 00:47:46 -- common/autotest_common.sh@160 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:02.314 00:47:46 -- common/autotest_common.sh@163 -- # : 00:08:02.314 00:47:46 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:02.314 00:47:46 -- common/autotest_common.sh@165 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:02.314 00:47:46 -- common/autotest_common.sh@167 -- # : 0 00:08:02.314 00:47:46 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:02.314 00:47:46 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:02.314 00:47:46 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:02.314 00:47:46 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.314 00:47:46 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.314 00:47:46 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.314 00:47:46 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.314 00:47:46 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.315 00:47:46 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.315 00:47:46 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:02.315 00:47:46 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:02.315 00:47:46 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:02.315 00:47:46 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:02.315 00:47:46 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:02.315 00:47:46 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:02.315 00:47:46 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:02.315 00:47:46 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:02.315 00:47:46 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:02.315 00:47:46 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:02.315 00:47:46 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:02.315 00:47:46 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:02.315 00:47:46 -- common/autotest_common.sh@196 -- # cat 00:08:02.315 00:47:46 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:02.315 00:47:46 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:02.315 00:47:46 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:02.315 00:47:46 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:02.315 00:47:46 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:02.315 00:47:46 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:02.315 00:47:46 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:02.315 00:47:46 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:02.315 00:47:46 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:02.315 00:47:46 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:02.315 00:47:46 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:02.315 00:47:46 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:02.315 00:47:46 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:02.315 00:47:46 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:02.315 00:47:46 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:02.315 00:47:46 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:02.315 00:47:46 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:02.315 00:47:46 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:02.315 00:47:46 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:02.315 00:47:46 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:02.315 00:47:46 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:02.315 00:47:46 -- common/autotest_common.sh@249 -- # valgrind= 00:08:02.315 00:47:46 -- common/autotest_common.sh@255 -- # uname -s 00:08:02.315 00:47:46 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:02.315 00:47:46 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:02.315 00:47:46 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:02.315 00:47:46 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:02.315 00:47:46 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:02.315 00:47:46 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:02.315 00:47:46 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:02.315 00:47:46 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:08:02.315 00:47:46 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:02.315 00:47:46 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:02.315 00:47:46 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:02.315 00:47:46 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:02.315 00:47:46 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:02.315 00:47:46 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:02.315 00:47:46 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:02.315 00:47:46 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:08:02.315 00:47:46 -- common/autotest_common.sh@309 -- # [[ -z 3297981 ]] 00:08:02.315 00:47:46 -- common/autotest_common.sh@309 -- # kill -0 3297981 00:08:02.315 00:47:46 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:02.315 00:47:46 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:02.315 00:47:46 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:02.315 00:47:46 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:02.315 00:47:46 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:02.315 00:47:46 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:02.315 00:47:46 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:02.315 00:47:46 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:02.315 00:47:46 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.4QRW3B 00:08:02.315 00:47:46 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:02.315 00:47:46 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:02.315 00:47:46 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:02.315 00:47:46 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4QRW3B/tests/target /tmp/spdk.4QRW3B 00:08:02.315 00:47:46 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@318 -- # df -T 00:08:02.315 00:47:46 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=53540581376 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994708992 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=8454127616 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943834112 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997352448 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996361216 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=995328 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:08:02.315 00:47:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:08:02.315 00:47:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:02.315 00:47:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.315 00:47:46 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:02.315 * Looking for test storage... 
00:08:02.315 00:47:46 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:02.315 00:47:46 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:02.316 00:47:46 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.316 00:47:46 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:02.316 00:47:46 -- common/autotest_common.sh@363 -- # mount=/ 00:08:02.316 00:47:46 -- common/autotest_common.sh@365 -- # target_space=53540581376 00:08:02.316 00:47:46 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:02.316 00:47:46 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:02.316 00:47:46 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:02.316 00:47:46 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:02.316 00:47:46 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:02.316 00:47:46 -- common/autotest_common.sh@372 -- # new_size=10668720128 00:08:02.316 00:47:46 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:02.316 00:47:46 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.316 00:47:46 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.316 00:47:46 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.316 00:47:46 -- common/autotest_common.sh@380 -- # return 0 00:08:02.316 00:47:46 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:02.316 00:47:46 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:02.316 00:47:46 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:02.316 00:47:46 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:02.316 00:47:46 -- common/autotest_common.sh@1672 -- # true 00:08:02.316 00:47:46 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:02.316 00:47:46 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:02.316 00:47:46 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:02.316 00:47:46 -- common/autotest_common.sh@27 -- # exec 00:08:02.316 00:47:46 -- common/autotest_common.sh@29 -- # exec 00:08:02.316 00:47:46 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:02.316 00:47:46 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:02.316 00:47:46 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:02.316 00:47:46 -- common/autotest_common.sh@18 -- # set -x 00:08:02.316 00:47:46 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.316 00:47:46 -- nvmf/common.sh@7 -- # uname -s 00:08:02.316 00:47:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.316 00:47:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.316 00:47:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.316 00:47:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.316 00:47:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.316 00:47:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.316 00:47:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.316 00:47:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.316 00:47:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.316 00:47:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.316 00:47:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.316 00:47:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.316 00:47:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.316 00:47:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.316 00:47:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.316 00:47:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.316 00:47:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.316 00:47:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.316 00:47:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.316 00:47:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.316 00:47:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.316 00:47:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.316 00:47:46 -- paths/export.sh@5 -- # export PATH 00:08:02.316 00:47:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.316 00:47:46 -- nvmf/common.sh@46 -- # : 0 00:08:02.316 00:47:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:02.316 00:47:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:02.316 00:47:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:02.316 00:47:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.316 00:47:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.316 00:47:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:02.316 00:47:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:02.316 00:47:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:02.316 00:47:46 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:02.316 00:47:46 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:02.316 00:47:46 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:02.316 00:47:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:02.316 00:47:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.316 00:47:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:02.316 00:47:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:02.316 00:47:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:02.316 00:47:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.316 00:47:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.316 00:47:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.316 00:47:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:02.316 00:47:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:02.316 00:47:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:02.316 00:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:04.872 00:47:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.872 00:47:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:04.872 00:47:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:04.872 00:47:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:04.872 00:47:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:04.872 00:47:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:04.872 00:47:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:04.872 00:47:48 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:04.872 00:47:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:04.872 00:47:48 -- nvmf/common.sh@295 -- # e810=() 00:08:04.872 00:47:48 -- nvmf/common.sh@295 -- # local -ga e810 00:08:04.872 00:47:48 -- nvmf/common.sh@296 -- # x722=() 00:08:04.872 00:47:48 -- nvmf/common.sh@296 -- # local -ga x722 00:08:04.872 00:47:48 -- nvmf/common.sh@297 -- # mlx=() 00:08:04.872 00:47:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:04.872 00:47:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.872 00:47:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:04.872 00:47:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:04.872 00:47:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:04.872 00:47:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.872 00:47:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:04.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:04.872 00:47:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.872 00:47:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:04.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:04.872 00:47:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:04.872 00:47:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.872 00:47:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.872 00:47:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.872 00:47:48 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.872 00:47:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:04.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:04.872 00:47:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.872 00:47:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.872 00:47:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.872 00:47:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.872 00:47:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.872 00:47:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:04.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:04.872 00:47:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.872 00:47:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:04.872 00:47:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:04.872 00:47:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:04.872 00:47:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.872 00:47:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.872 00:47:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.872 00:47:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:04.872 00:47:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.872 00:47:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.872 00:47:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:04.872 00:47:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.872 00:47:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.872 00:47:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:04.872 00:47:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:04.872 00:47:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.872 00:47:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.872 00:47:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.872 00:47:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.872 00:47:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:04.872 00:47:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.872 00:47:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.872 00:47:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.872 00:47:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:04.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:04.872 00:08:04.872 --- 10.0.0.2 ping statistics --- 00:08:04.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.872 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:04.872 00:47:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:04.872 00:08:04.872 --- 10.0.0.1 ping statistics --- 00:08:04.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.872 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:04.872 00:47:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.872 00:47:48 -- nvmf/common.sh@410 -- # return 0 00:08:04.872 00:47:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:04.872 00:47:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.872 00:47:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:04.872 00:47:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.872 00:47:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:04.872 00:47:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:04.872 00:47:48 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:04.872 00:47:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:04.872 00:47:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.872 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:04.872 ************************************ 00:08:04.872 START TEST nvmf_filesystem_no_in_capsule 00:08:04.872 ************************************ 00:08:04.872 00:47:48 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:04.872 00:47:48 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:04.872 00:47:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:04.872 00:47:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:04.872 00:47:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:04.872 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:04.872 00:47:48 -- nvmf/common.sh@469 -- # nvmfpid=3299611 00:08:04.872 00:47:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.872 00:47:48 -- nvmf/common.sh@470 -- # waitforlisten 3299611 00:08:04.872 00:47:48 -- common/autotest_common.sh@819 -- # '[' -z 3299611 ']' 00:08:04.872 00:47:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.872 00:47:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:04.872 00:47:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.873 00:47:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:04.873 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:04.873 [2024-07-23 00:47:48.798818] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
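nvmf_tcp_init above turns the two ice ports (cvl_0_0 and cvl_0_1) into a point-to-point NVMe/TCP test path: the target port is moved into its own network namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened in iptables, connectivity is checked with one ping in each direction, and the nvme-tcp host module is loaded. Condensed from the trace (same commands, with long option handling omitted):

    # Condensed from the nvmf_tcp_init trace above (phy net type, e810/ice NICs)
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_INTERFACE=cvl_0_0
    NVMF_INITIATOR_INTERFACE=cvl_0_1
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush $NVMF_TARGET_INTERFACE
    ip -4 addr flush $NVMF_INITIATOR_INTERFACE
    ip netns add $NVMF_TARGET_NAMESPACE
    ip link set $NVMF_TARGET_INTERFACE netns $NVMF_TARGET_NAMESPACE
    ip addr add $NVMF_INITIATOR_IP/24 dev $NVMF_INITIATOR_INTERFACE
    ip netns exec $NVMF_TARGET_NAMESPACE ip addr add $NVMF_FIRST_TARGET_IP/24 dev $NVMF_TARGET_INTERFACE
    ip link set $NVMF_INITIATOR_INTERFACE up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set $NVMF_TARGET_INTERFACE up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
    iptables -I INPUT 1 -i $NVMF_INITIATOR_INTERFACE -p tcp --dport 4420 -j ACCEPT
    ping -c 1 $NVMF_FIRST_TARGET_IP                                      # host -> target namespace
    ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 $NVMF_INITIATOR_IP    # target namespace -> host
    modprobe nvme-tcp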
00:08:04.873 [2024-07-23 00:47:48.798911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.873 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.873 [2024-07-23 00:47:48.863293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.873 [2024-07-23 00:47:48.954012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.873 [2024-07-23 00:47:48.954161] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.873 [2024-07-23 00:47:48.954179] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.873 [2024-07-23 00:47:48.954192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.873 [2024-07-23 00:47:48.954267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.873 [2024-07-23 00:47:48.954318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.873 [2024-07-23 00:47:48.954366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.873 [2024-07-23 00:47:48.954368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.807 00:47:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:05.807 00:47:49 -- common/autotest_common.sh@852 -- # return 0 00:08:05.807 00:47:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:05.807 00:47:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 00:47:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.807 00:47:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:05.807 00:47:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:05.807 00:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 [2024-07-23 00:47:49.766222] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.807 00:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.807 00:47:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:05.807 00:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 Malloc1 00:08:05.807 00:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.807 00:47:49 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.807 00:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 00:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.807 00:47:49 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.807 00:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 00:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.807 00:47:49 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
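nvmf_filesystem_no_in_capsule then starts nvmf_tgt inside that namespace and assembles the target over its RPC socket; -c 0 means no in-capsule data, so every write payload is transferred in a separate data phase. The sequence traced above, condensed (rpc_cmd is the harness wrapper around the SPDK RPC socket; 512/512 are the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set at the top of filesystem.sh):

    # Condensed target bring-up for the in_capsule=0 pass (from the trace above)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                               # waits for /var/tmp/spdk.sock to answer

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side, traced just below: attach the subsystem and wait for /dev/nvme0n1 to appear
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The bdev_get_bdevs output that follows confirms the geometry (1048576 blocks of 512 B, i.e. 536870912 bytes), which the test later compares against the size of the attached nvme0n1 device before partitioning it.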
00:08:05.807 00:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 [2024-07-23 00:47:49.947000] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.807 00:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.807 00:47:49 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:05.807 00:47:49 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:05.807 00:47:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:05.807 00:47:49 -- common/autotest_common.sh@1359 -- # local bs 00:08:05.807 00:47:49 -- common/autotest_common.sh@1360 -- # local nb 00:08:05.807 00:47:49 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:05.807 00:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.807 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 00:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.807 00:47:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:05.807 { 00:08:05.807 "name": "Malloc1", 00:08:05.807 "aliases": [ 00:08:05.807 "df054044-4a81-498a-a895-dd78dcf8436a" 00:08:05.807 ], 00:08:05.807 "product_name": "Malloc disk", 00:08:05.807 "block_size": 512, 00:08:05.807 "num_blocks": 1048576, 00:08:05.807 "uuid": "df054044-4a81-498a-a895-dd78dcf8436a", 00:08:05.807 "assigned_rate_limits": { 00:08:05.807 "rw_ios_per_sec": 0, 00:08:05.807 "rw_mbytes_per_sec": 0, 00:08:05.807 "r_mbytes_per_sec": 0, 00:08:05.807 "w_mbytes_per_sec": 0 00:08:05.807 }, 00:08:05.807 "claimed": true, 00:08:05.807 "claim_type": "exclusive_write", 00:08:05.807 "zoned": false, 00:08:05.807 "supported_io_types": { 00:08:05.807 "read": true, 00:08:05.807 "write": true, 00:08:05.807 "unmap": true, 00:08:05.807 "write_zeroes": true, 00:08:05.807 "flush": true, 00:08:05.807 "reset": true, 00:08:05.807 "compare": false, 00:08:05.807 "compare_and_write": false, 00:08:05.807 "abort": true, 00:08:05.807 "nvme_admin": false, 00:08:05.807 "nvme_io": false 00:08:05.807 }, 00:08:05.807 "memory_domains": [ 00:08:05.807 { 00:08:05.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.807 "dma_device_type": 2 00:08:05.807 } 00:08:05.807 ], 00:08:05.807 "driver_specific": {} 00:08:05.807 } 00:08:05.807 ]' 00:08:05.807 00:47:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:06.065 00:47:50 -- common/autotest_common.sh@1362 -- # bs=512 00:08:06.065 00:47:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:06.065 00:47:50 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:06.065 00:47:50 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:06.065 00:47:50 -- common/autotest_common.sh@1367 -- # echo 512 00:08:06.065 00:47:50 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:06.065 00:47:50 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.630 00:47:50 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.630 00:47:50 -- common/autotest_common.sh@1177 -- # local i=0 00:08:06.630 00:47:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.630 00:47:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:06.630 00:47:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:09.155 00:47:52 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:09.155 00:47:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:09.155 00:47:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.155 00:47:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:09.155 00:47:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.155 00:47:52 -- common/autotest_common.sh@1187 -- # return 0 00:08:09.155 00:47:52 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:09.155 00:47:52 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:09.155 00:47:52 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:09.155 00:47:52 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:09.155 00:47:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:09.155 00:47:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:09.155 00:47:52 -- setup/common.sh@80 -- # echo 536870912 00:08:09.155 00:47:52 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:09.155 00:47:52 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:09.155 00:47:52 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:09.155 00:47:52 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:09.155 00:47:52 -- target/filesystem.sh@69 -- # partprobe 00:08:09.155 00:47:53 -- target/filesystem.sh@70 -- # sleep 1 00:08:10.084 00:47:54 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:10.084 00:47:54 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:10.084 00:47:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:10.084 00:47:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.084 00:47:54 -- common/autotest_common.sh@10 -- # set +x 00:08:10.084 ************************************ 00:08:10.084 START TEST filesystem_ext4 00:08:10.084 ************************************ 00:08:10.084 00:47:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:10.084 00:47:54 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:10.084 00:47:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.085 00:47:54 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:10.085 00:47:54 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:10.085 00:47:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:10.085 00:47:54 -- common/autotest_common.sh@904 -- # local i=0 00:08:10.085 00:47:54 -- common/autotest_common.sh@905 -- # local force 00:08:10.085 00:47:54 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:10.085 00:47:54 -- common/autotest_common.sh@908 -- # force=-F 00:08:10.085 00:47:54 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:10.085 mke2fs 1.46.5 (30-Dec-2021) 00:08:10.343 Discarding device blocks: 0/522240 done 00:08:10.343 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:10.343 Filesystem UUID: 5fcd1b0f-95bc-4290-8256-6ad20516e2c3 00:08:10.343 Superblock backups stored on blocks: 00:08:10.343 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:10.343 00:08:10.343 Allocating group tables: 0/64 done 00:08:10.343 Writing inode tables: 0/64 done 00:08:10.600 Creating journal (8192 blocks): done 00:08:11.423 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:11.423 00:08:11.423 00:47:55 -- 
common/autotest_common.sh@921 -- # return 0 00:08:11.423 00:47:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.356 00:47:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.356 00:47:56 -- target/filesystem.sh@25 -- # sync 00:08:12.356 00:47:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.356 00:47:56 -- target/filesystem.sh@27 -- # sync 00:08:12.356 00:47:56 -- target/filesystem.sh@29 -- # i=0 00:08:12.356 00:47:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.356 00:47:56 -- target/filesystem.sh@37 -- # kill -0 3299611 00:08:12.356 00:47:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.356 00:47:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.356 00:47:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.356 00:47:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.356 00:08:12.356 real 0m2.221s 00:08:12.356 user 0m0.014s 00:08:12.356 sys 0m0.058s 00:08:12.356 00:47:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.356 00:47:56 -- common/autotest_common.sh@10 -- # set +x 00:08:12.356 ************************************ 00:08:12.356 END TEST filesystem_ext4 00:08:12.356 ************************************ 00:08:12.356 00:47:56 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:12.356 00:47:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:12.356 00:47:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.356 00:47:56 -- common/autotest_common.sh@10 -- # set +x 00:08:12.356 ************************************ 00:08:12.356 START TEST filesystem_btrfs 00:08:12.356 ************************************ 00:08:12.356 00:47:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:12.356 00:47:56 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:12.356 00:47:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.356 00:47:56 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:12.356 00:47:56 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:12.356 00:47:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:12.356 00:47:56 -- common/autotest_common.sh@904 -- # local i=0 00:08:12.356 00:47:56 -- common/autotest_common.sh@905 -- # local force 00:08:12.356 00:47:56 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:12.356 00:47:56 -- common/autotest_common.sh@910 -- # force=-f 00:08:12.356 00:47:56 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:12.923 btrfs-progs v6.6.2 00:08:12.923 See https://btrfs.readthedocs.io for more information. 00:08:12.923 00:08:12.923 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:12.923 NOTE: several default settings have changed in version 5.15, please make sure 00:08:12.923 this does not affect your deployments: 00:08:12.923 - DUP for metadata (-m dup) 00:08:12.923 - enabled no-holes (-O no-holes) 00:08:12.923 - enabled free-space-tree (-R free-space-tree) 00:08:12.923 00:08:12.923 Label: (null) 00:08:12.923 UUID: 59f30ba2-10fb-439e-a42e-44d8babe28ff 00:08:12.923 Node size: 16384 00:08:12.923 Sector size: 4096 00:08:12.923 Filesystem size: 510.00MiB 00:08:12.923 Block group profiles: 00:08:12.923 Data: single 8.00MiB 00:08:12.923 Metadata: DUP 32.00MiB 00:08:12.923 System: DUP 8.00MiB 00:08:12.923 SSD detected: yes 00:08:12.923 Zoned device: no 00:08:12.923 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:12.923 Runtime features: free-space-tree 00:08:12.923 Checksum: crc32c 00:08:12.923 Number of devices: 1 00:08:12.923 Devices: 00:08:12.923 ID SIZE PATH 00:08:12.923 1 510.00MiB /dev/nvme0n1p1 00:08:12.923 00:08:12.923 00:47:56 -- common/autotest_common.sh@921 -- # return 0 00:08:12.923 00:47:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.486 00:47:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.486 00:47:57 -- target/filesystem.sh@25 -- # sync 00:08:13.486 00:47:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.486 00:47:57 -- target/filesystem.sh@27 -- # sync 00:08:13.486 00:47:57 -- target/filesystem.sh@29 -- # i=0 00:08:13.486 00:47:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.486 00:47:57 -- target/filesystem.sh@37 -- # kill -0 3299611 00:08:13.486 00:47:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.486 00:47:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.486 00:47:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.486 00:47:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.486 00:08:13.486 real 0m1.211s 00:08:13.486 user 0m0.021s 00:08:13.486 sys 0m0.111s 00:08:13.486 00:47:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.486 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:08:13.486 ************************************ 00:08:13.486 END TEST filesystem_btrfs 00:08:13.486 ************************************ 00:08:13.486 00:47:57 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:13.486 00:47:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:13.486 00:47:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.486 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:08:13.747 ************************************ 00:08:13.747 START TEST filesystem_xfs 00:08:13.747 ************************************ 00:08:13.747 00:47:57 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:13.747 00:47:57 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:13.747 00:47:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.747 00:47:57 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:13.747 00:47:57 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:13.747 00:47:57 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:13.747 00:47:57 -- common/autotest_common.sh@904 -- # local i=0 00:08:13.747 00:47:57 -- common/autotest_common.sh@905 -- # local force 00:08:13.747 00:47:57 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:13.747 00:47:57 -- common/autotest_common.sh@910 -- # force=-f 00:08:13.747 00:47:57 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:13.747 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:13.747 = sectsz=512 attr=2, projid32bit=1 00:08:13.747 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:13.747 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:13.747 data = bsize=4096 blocks=130560, imaxpct=25 00:08:13.747 = sunit=0 swidth=0 blks 00:08:13.747 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:13.747 log =internal log bsize=4096 blocks=16384, version=2 00:08:13.747 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:13.747 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:14.682 Discarding blocks...Done. 00:08:14.682 00:47:58 -- common/autotest_common.sh@921 -- # return 0 00:08:14.682 00:47:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.211 00:48:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.211 00:48:01 -- target/filesystem.sh@25 -- # sync 00:08:17.211 00:48:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.211 00:48:01 -- target/filesystem.sh@27 -- # sync 00:08:17.211 00:48:01 -- target/filesystem.sh@29 -- # i=0 00:08:17.211 00:48:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.211 00:48:01 -- target/filesystem.sh@37 -- # kill -0 3299611 00:08:17.211 00:48:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.212 00:48:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.212 00:48:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.212 00:48:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.212 00:08:17.212 real 0m3.616s 00:08:17.212 user 0m0.023s 00:08:17.212 sys 0m0.054s 00:08:17.212 00:48:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.212 00:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:17.212 ************************************ 00:08:17.212 END TEST filesystem_xfs 00:08:17.212 ************************************ 00:08:17.212 00:48:01 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:17.212 00:48:01 -- target/filesystem.sh@93 -- # sync 00:08:17.212 00:48:01 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.471 00:48:01 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.471 00:48:01 -- common/autotest_common.sh@1198 -- # local i=0 00:08:17.471 00:48:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:17.471 00:48:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.471 00:48:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:17.471 00:48:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.471 00:48:01 -- common/autotest_common.sh@1210 -- # return 0 00:08:17.471 00:48:01 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.471 00:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.471 00:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:17.471 00:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.471 00:48:01 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:17.471 00:48:01 -- target/filesystem.sh@101 -- # killprocess 3299611 00:08:17.471 00:48:01 -- common/autotest_common.sh@926 -- # '[' -z 3299611 ']' 00:08:17.471 00:48:01 -- common/autotest_common.sh@930 -- # kill -0 3299611 00:08:17.471 00:48:01 -- 
common/autotest_common.sh@931 -- # uname 00:08:17.471 00:48:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:17.471 00:48:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3299611 00:08:17.471 00:48:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:17.471 00:48:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:17.471 00:48:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3299611' 00:08:17.471 killing process with pid 3299611 00:08:17.471 00:48:01 -- common/autotest_common.sh@945 -- # kill 3299611 00:08:17.471 00:48:01 -- common/autotest_common.sh@950 -- # wait 3299611 00:08:18.039 00:48:01 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:18.039 00:08:18.039 real 0m13.190s 00:08:18.039 user 0m50.951s 00:08:18.039 sys 0m1.810s 00:08:18.039 00:48:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.039 00:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:18.039 ************************************ 00:08:18.039 END TEST nvmf_filesystem_no_in_capsule 00:08:18.039 ************************************ 00:08:18.039 00:48:01 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:18.039 00:48:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:18.039 00:48:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.039 00:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:18.039 ************************************ 00:08:18.039 START TEST nvmf_filesystem_in_capsule 00:08:18.039 ************************************ 00:08:18.039 00:48:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:18.039 00:48:01 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:18.039 00:48:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:18.039 00:48:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:18.039 00:48:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:18.039 00:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:18.039 00:48:01 -- nvmf/common.sh@469 -- # nvmfpid=3301579 00:08:18.039 00:48:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.039 00:48:01 -- nvmf/common.sh@470 -- # waitforlisten 3301579 00:08:18.039 00:48:01 -- common/autotest_common.sh@819 -- # '[' -z 3301579 ']' 00:08:18.039 00:48:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.039 00:48:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:18.039 00:48:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.039 00:48:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:18.039 00:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:18.039 [2024-07-23 00:48:02.017503] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:18.040 [2024-07-23 00:48:02.017608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.040 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.040 [2024-07-23 00:48:02.085988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.040 [2024-07-23 00:48:02.177945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.040 [2024-07-23 00:48:02.178105] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.040 [2024-07-23 00:48:02.178127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.040 [2024-07-23 00:48:02.178143] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.040 [2024-07-23 00:48:02.178208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.040 [2024-07-23 00:48:02.178263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.040 [2024-07-23 00:48:02.178314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.040 [2024-07-23 00:48:02.178317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.976 00:48:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.976 00:48:02 -- common/autotest_common.sh@852 -- # return 0 00:08:18.976 00:48:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:18.976 00:48:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:18.976 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:08:18.976 00:48:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.976 00:48:03 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:18.976 00:48:03 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:18.976 00:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.976 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:18.976 [2024-07-23 00:48:03.017317] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.976 00:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.976 00:48:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:18.976 00:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.976 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.237 Malloc1 00:08:19.237 00:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.237 00:48:03 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.237 00:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.237 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.237 00:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.237 00:48:03 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.237 00:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.237 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.237 00:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.237 00:48:03 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
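The second pass, started here as nvmf_filesystem_in_capsule, repeats the same bring-up and the same ext4/btrfs/xfs matrix; the only functional difference visible in the trace is the transport flag, which lets write payloads of up to 4 KiB ride inside the NVMe-oF command capsule instead of a separate data transfer:

    # Only delta versus the first pass (per the trace above)
    run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # in-capsule data size = 4096 bytes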
00:08:19.237 00:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.237 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.237 [2024-07-23 00:48:03.210156] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.237 00:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.237 00:48:03 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:19.237 00:48:03 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:19.237 00:48:03 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:19.237 00:48:03 -- common/autotest_common.sh@1359 -- # local bs 00:08:19.237 00:48:03 -- common/autotest_common.sh@1360 -- # local nb 00:08:19.237 00:48:03 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:19.237 00:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.237 00:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.237 00:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.237 00:48:03 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:19.237 { 00:08:19.237 "name": "Malloc1", 00:08:19.237 "aliases": [ 00:08:19.237 "5cc4e8cb-680b-4561-9bf4-fb492be493e2" 00:08:19.237 ], 00:08:19.237 "product_name": "Malloc disk", 00:08:19.237 "block_size": 512, 00:08:19.237 "num_blocks": 1048576, 00:08:19.237 "uuid": "5cc4e8cb-680b-4561-9bf4-fb492be493e2", 00:08:19.237 "assigned_rate_limits": { 00:08:19.237 "rw_ios_per_sec": 0, 00:08:19.237 "rw_mbytes_per_sec": 0, 00:08:19.237 "r_mbytes_per_sec": 0, 00:08:19.237 "w_mbytes_per_sec": 0 00:08:19.237 }, 00:08:19.237 "claimed": true, 00:08:19.237 "claim_type": "exclusive_write", 00:08:19.237 "zoned": false, 00:08:19.237 "supported_io_types": { 00:08:19.237 "read": true, 00:08:19.237 "write": true, 00:08:19.237 "unmap": true, 00:08:19.237 "write_zeroes": true, 00:08:19.237 "flush": true, 00:08:19.237 "reset": true, 00:08:19.237 "compare": false, 00:08:19.237 "compare_and_write": false, 00:08:19.237 "abort": true, 00:08:19.237 "nvme_admin": false, 00:08:19.237 "nvme_io": false 00:08:19.237 }, 00:08:19.237 "memory_domains": [ 00:08:19.237 { 00:08:19.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.237 "dma_device_type": 2 00:08:19.237 } 00:08:19.237 ], 00:08:19.237 "driver_specific": {} 00:08:19.237 } 00:08:19.237 ]' 00:08:19.237 00:48:03 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:19.237 00:48:03 -- common/autotest_common.sh@1362 -- # bs=512 00:08:19.237 00:48:03 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:19.237 00:48:03 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:19.237 00:48:03 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:19.237 00:48:03 -- common/autotest_common.sh@1367 -- # echo 512 00:08:19.237 00:48:03 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.237 00:48:03 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.806 00:48:04 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.806 00:48:04 -- common/autotest_common.sh@1177 -- # local i=0 00:08:19.806 00:48:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.806 00:48:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:19.806 00:48:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:22.336 00:48:06 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:22.336 00:48:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:22.336 00:48:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.336 00:48:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:22.336 00:48:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.336 00:48:06 -- common/autotest_common.sh@1187 -- # return 0 00:08:22.336 00:48:06 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:22.336 00:48:06 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:22.336 00:48:06 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:22.336 00:48:06 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:22.336 00:48:06 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:22.336 00:48:06 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:22.336 00:48:06 -- setup/common.sh@80 -- # echo 536870912 00:08:22.336 00:48:06 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:22.336 00:48:06 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:22.336 00:48:06 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:22.336 00:48:06 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:22.336 00:48:06 -- target/filesystem.sh@69 -- # partprobe 00:08:23.289 00:48:07 -- target/filesystem.sh@70 -- # sleep 1 00:08:24.235 00:48:08 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:24.235 00:48:08 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:24.235 00:48:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.235 00:48:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.235 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:08:24.235 ************************************ 00:08:24.235 START TEST filesystem_in_capsule_ext4 00:08:24.235 ************************************ 00:08:24.235 00:48:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:24.235 00:48:08 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:24.235 00:48:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.235 00:48:08 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:24.235 00:48:08 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:24.235 00:48:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.235 00:48:08 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.235 00:48:08 -- common/autotest_common.sh@905 -- # local force 00:08:24.235 00:48:08 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:24.235 00:48:08 -- common/autotest_common.sh@908 -- # force=-F 00:08:24.235 00:48:08 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:24.235 mke2fs 1.46.5 (30-Dec-2021) 00:08:24.235 Discarding device blocks: 0/522240 done 00:08:24.235 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:24.235 Filesystem UUID: f9096f78-9539-40ed-938c-86bc97e6922d 00:08:24.235 Superblock backups stored on blocks: 00:08:24.235 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:24.235 00:08:24.235 Allocating group tables: 0/64 done 00:08:24.235 Writing inode tables: 0/64 done 00:08:26.764 Creating journal (8192 blocks): done 00:08:26.764 Writing superblocks and filesystem accounting information: 0/64 done 00:08:26.764 00:08:26.764 
00:48:10 -- common/autotest_common.sh@921 -- # return 0 00:08:26.764 00:48:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.330 00:48:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.330 00:48:11 -- target/filesystem.sh@25 -- # sync 00:08:27.330 00:48:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.330 00:48:11 -- target/filesystem.sh@27 -- # sync 00:08:27.330 00:48:11 -- target/filesystem.sh@29 -- # i=0 00:08:27.330 00:48:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.330 00:48:11 -- target/filesystem.sh@37 -- # kill -0 3301579 00:08:27.330 00:48:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.330 00:48:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.330 00:48:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.330 00:48:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.589 00:08:27.589 real 0m3.272s 00:08:27.589 user 0m0.019s 00:08:27.589 sys 0m0.059s 00:08:27.589 00:48:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.589 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:08:27.589 ************************************ 00:08:27.589 END TEST filesystem_in_capsule_ext4 00:08:27.589 ************************************ 00:08:27.589 00:48:11 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:27.589 00:48:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:27.589 00:48:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.589 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:08:27.589 ************************************ 00:08:27.589 START TEST filesystem_in_capsule_btrfs 00:08:27.589 ************************************ 00:08:27.589 00:48:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:27.589 00:48:11 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:27.589 00:48:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.589 00:48:11 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:27.589 00:48:11 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:27.589 00:48:11 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:27.589 00:48:11 -- common/autotest_common.sh@904 -- # local i=0 00:08:27.589 00:48:11 -- common/autotest_common.sh@905 -- # local force 00:08:27.589 00:48:11 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:27.589 00:48:11 -- common/autotest_common.sh@910 -- # force=-f 00:08:27.589 00:48:11 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:27.847 btrfs-progs v6.6.2 00:08:27.847 See https://btrfs.readthedocs.io for more information. 00:08:27.847 00:08:27.847 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:27.847 NOTE: several default settings have changed in version 5.15, please make sure 00:08:27.847 this does not affect your deployments: 00:08:27.847 - DUP for metadata (-m dup) 00:08:27.847 - enabled no-holes (-O no-holes) 00:08:27.847 - enabled free-space-tree (-R free-space-tree) 00:08:27.847 00:08:27.847 Label: (null) 00:08:27.847 UUID: 74807622-40d0-4468-9673-860938011962 00:08:27.847 Node size: 16384 00:08:27.847 Sector size: 4096 00:08:27.847 Filesystem size: 510.00MiB 00:08:27.847 Block group profiles: 00:08:27.847 Data: single 8.00MiB 00:08:27.847 Metadata: DUP 32.00MiB 00:08:27.847 System: DUP 8.00MiB 00:08:27.847 SSD detected: yes 00:08:27.847 Zoned device: no 00:08:27.847 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:27.847 Runtime features: free-space-tree 00:08:27.847 Checksum: crc32c 00:08:27.847 Number of devices: 1 00:08:27.847 Devices: 00:08:27.847 ID SIZE PATH 00:08:27.847 1 510.00MiB /dev/nvme0n1p1 00:08:27.847 00:08:27.847 00:48:11 -- common/autotest_common.sh@921 -- # return 0 00:08:27.847 00:48:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.781 00:48:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.781 00:48:12 -- target/filesystem.sh@25 -- # sync 00:08:28.781 00:48:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.781 00:48:12 -- target/filesystem.sh@27 -- # sync 00:08:28.781 00:48:12 -- target/filesystem.sh@29 -- # i=0 00:08:28.781 00:48:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.782 00:48:12 -- target/filesystem.sh@37 -- # kill -0 3301579 00:08:28.782 00:48:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.782 00:48:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.782 00:48:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.782 00:48:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.782 00:08:28.782 real 0m1.319s 00:08:28.782 user 0m0.030s 00:08:28.782 sys 0m0.104s 00:08:28.782 00:48:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.782 00:48:12 -- common/autotest_common.sh@10 -- # set +x 00:08:28.782 ************************************ 00:08:28.782 END TEST filesystem_in_capsule_btrfs 00:08:28.782 ************************************ 00:08:28.782 00:48:12 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:28.782 00:48:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.782 00:48:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.782 00:48:12 -- common/autotest_common.sh@10 -- # set +x 00:08:28.782 ************************************ 00:08:28.782 START TEST filesystem_in_capsule_xfs 00:08:28.782 ************************************ 00:08:28.782 00:48:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:28.782 00:48:12 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:28.782 00:48:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.782 00:48:12 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:28.782 00:48:12 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:28.782 00:48:12 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:28.782 00:48:12 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.782 00:48:12 -- common/autotest_common.sh@905 -- # local force 00:08:28.782 00:48:12 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:28.782 00:48:12 -- common/autotest_common.sh@910 -- # force=-f 
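Every mkfs call in this log funnels through the make_filesystem helper; its @902-@913 trace lines show it doing little more than choosing the right force flag per filesystem before invoking the matching mkfs tool. A condensed sketch reconstructed from those lines (the local i=0 in the trace suggests a retry loop around mkfs, omitted here since only the success path is traced):

    # Sketch of make_filesystem, reconstructed from the @902-@921 trace lines
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [[ $fstype == ext4 ]]; then
            force=-F              # mkfs.ext4 forces with -F
        else
            force=-f              # mkfs.btrfs and mkfs.xfs force with -f
        fi
        mkfs.$fstype $force "$dev_name"
    }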
00:08:28.782 00:48:12 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:29.041 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:29.041 = sectsz=512 attr=2, projid32bit=1 00:08:29.041 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:29.041 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:29.041 data = bsize=4096 blocks=130560, imaxpct=25 00:08:29.041 = sunit=0 swidth=0 blks 00:08:29.041 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:29.041 log =internal log bsize=4096 blocks=16384, version=2 00:08:29.041 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:29.041 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:29.606 Discarding blocks...Done. 00:08:29.606 00:48:13 -- common/autotest_common.sh@921 -- # return 0 00:08:29.606 00:48:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.136 00:48:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.136 00:48:15 -- target/filesystem.sh@25 -- # sync 00:08:32.136 00:48:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.136 00:48:15 -- target/filesystem.sh@27 -- # sync 00:08:32.136 00:48:15 -- target/filesystem.sh@29 -- # i=0 00:08:32.136 00:48:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.136 00:48:15 -- target/filesystem.sh@37 -- # kill -0 3301579 00:08:32.136 00:48:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.136 00:48:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.136 00:48:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.136 00:48:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.136 00:08:32.136 real 0m2.966s 00:08:32.136 user 0m0.020s 00:08:32.136 sys 0m0.057s 00:08:32.136 00:48:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.136 00:48:15 -- common/autotest_common.sh@10 -- # set +x 00:08:32.136 ************************************ 00:08:32.136 END TEST filesystem_in_capsule_xfs 00:08:32.136 ************************************ 00:08:32.136 00:48:15 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:32.136 00:48:15 -- target/filesystem.sh@93 -- # sync 00:08:32.136 00:48:15 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:32.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.136 00:48:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:32.136 00:48:16 -- common/autotest_common.sh@1198 -- # local i=0 00:08:32.136 00:48:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:32.136 00:48:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.136 00:48:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:32.136 00:48:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.136 00:48:16 -- common/autotest_common.sh@1210 -- # return 0 00:08:32.136 00:48:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.136 00:48:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.136 00:48:16 -- common/autotest_common.sh@10 -- # set +x 00:08:32.136 00:48:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.136 00:48:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:32.136 00:48:16 -- target/filesystem.sh@101 -- # killprocess 3301579 00:08:32.136 00:48:16 -- common/autotest_common.sh@926 -- # '[' -z 3301579 ']' 00:08:32.136 00:48:16 -- common/autotest_common.sh@930 -- # kill -0 3301579 
00:08:32.136 00:48:16 -- common/autotest_common.sh@931 -- # uname 00:08:32.136 00:48:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.136 00:48:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3301579 00:08:32.136 00:48:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.136 00:48:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.136 00:48:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3301579' 00:08:32.136 killing process with pid 3301579 00:08:32.136 00:48:16 -- common/autotest_common.sh@945 -- # kill 3301579 00:08:32.136 00:48:16 -- common/autotest_common.sh@950 -- # wait 3301579 00:08:32.397 00:48:16 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:32.397 00:08:32.397 real 0m14.578s 00:08:32.397 user 0m56.388s 00:08:32.397 sys 0m1.934s 00:08:32.397 00:48:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.397 00:48:16 -- common/autotest_common.sh@10 -- # set +x 00:08:32.397 ************************************ 00:08:32.397 END TEST nvmf_filesystem_in_capsule 00:08:32.397 ************************************ 00:08:32.397 00:48:16 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:32.397 00:48:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:32.397 00:48:16 -- nvmf/common.sh@116 -- # sync 00:08:32.397 00:48:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:32.397 00:48:16 -- nvmf/common.sh@119 -- # set +e 00:08:32.397 00:48:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:32.397 00:48:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:32.397 rmmod nvme_tcp 00:08:32.397 rmmod nvme_fabrics 00:08:32.658 rmmod nvme_keyring 00:08:32.658 00:48:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:32.658 00:48:16 -- nvmf/common.sh@123 -- # set -e 00:08:32.658 00:48:16 -- nvmf/common.sh@124 -- # return 0 00:08:32.658 00:48:16 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:32.658 00:48:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:32.658 00:48:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:32.658 00:48:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:32.658 00:48:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.658 00:48:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:32.658 00:48:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.658 00:48:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.658 00:48:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.569 00:48:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:34.569 00:08:34.569 real 0m32.387s 00:08:34.569 user 1m48.261s 00:08:34.569 sys 0m5.438s 00:08:34.569 00:48:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.569 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:08:34.569 ************************************ 00:08:34.569 END TEST nvmf_filesystem 00:08:34.569 ************************************ 00:08:34.569 00:48:18 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:34.569 00:48:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:34.570 00:48:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.570 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:08:34.570 ************************************ 00:08:34.570 START TEST nvmf_discovery 00:08:34.570 ************************************ 00:08:34.570 
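The discovery test that follows drives the target entirely through rpc_cmd, which in this test framework is assumed to forward to SPDK's scripts/rpc.py against the nvmf_tgt running in the cvl_0_0_ns_spdk namespace. A condensed sketch of the sequence, reconstructed from the xtrace below rather than taken verbatim from discovery.sh, with the NQNs, serial numbers and the 10.0.0.2 listener exactly as logged:

# Sketch only -- reconstructed from the trace below, not the verbatim discovery.sh source.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 4); do
    rpc_cmd bdev_null_create Null$i 102400 512          # NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420    # NVME_HOST supplies --hostnqn/--hostid; 6 log records expected
rpc_cmd nvmf_get_subsystems                                   # same picture via RPC
for i in $(seq 1 4); do                                       # teardown
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    rpc_cmd bdev_null_delete Null$i
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
[ -z "$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')" ]         # nothing left behind

In the trace this shows up twice over: once as the nvme discover log page with six records, and once as the nvmf_get_subsystems JSON dump.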
00:48:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:34.570 * Looking for test storage... 00:08:34.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.570 00:48:18 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.570 00:48:18 -- nvmf/common.sh@7 -- # uname -s 00:08:34.570 00:48:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.570 00:48:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.570 00:48:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.570 00:48:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.570 00:48:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.570 00:48:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.570 00:48:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.570 00:48:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.570 00:48:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.570 00:48:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.570 00:48:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.570 00:48:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.570 00:48:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.570 00:48:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.570 00:48:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.570 00:48:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.570 00:48:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.570 00:48:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.570 00:48:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.570 00:48:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.570 00:48:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.570 00:48:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.570 00:48:18 -- paths/export.sh@5 -- # export PATH 00:08:34.570 00:48:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.570 00:48:18 -- nvmf/common.sh@46 -- # : 0 00:08:34.570 00:48:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.570 00:48:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.570 00:48:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.570 00:48:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.570 00:48:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.570 00:48:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.570 00:48:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.570 00:48:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.570 00:48:18 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:34.570 00:48:18 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:34.570 00:48:18 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:34.570 00:48:18 -- target/discovery.sh@15 -- # hash nvme 00:08:34.570 00:48:18 -- target/discovery.sh@20 -- # nvmftestinit 00:08:34.570 00:48:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:34.570 00:48:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.570 00:48:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:34.570 00:48:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:34.570 00:48:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:34.570 00:48:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.570 00:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.570 00:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.570 00:48:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:34.570 00:48:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:34.570 00:48:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:34.570 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:08:37.107 00:48:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:37.107 00:48:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:37.107 00:48:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:37.107 00:48:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:37.107 00:48:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:37.107 00:48:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:37.107 00:48:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:37.107 00:48:20 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:37.107 00:48:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:37.107 00:48:20 -- nvmf/common.sh@295 -- # e810=() 00:08:37.107 00:48:20 -- nvmf/common.sh@295 -- # local -ga e810 00:08:37.107 00:48:20 -- nvmf/common.sh@296 -- # x722=() 00:08:37.107 00:48:20 -- nvmf/common.sh@296 -- # local -ga x722 00:08:37.107 00:48:20 -- nvmf/common.sh@297 -- # mlx=() 00:08:37.107 00:48:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:37.107 00:48:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.107 00:48:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:37.107 00:48:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:37.107 00:48:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:37.107 00:48:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:37.107 00:48:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:37.107 00:48:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:37.107 00:48:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:37.107 00:48:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:37.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:37.107 00:48:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:37.107 00:48:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:37.108 00:48:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:37.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:37.108 00:48:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:37.108 00:48:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:37.108 00:48:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.108 00:48:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:37.108 00:48:20 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.108 00:48:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:37.108 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:37.108 00:48:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.108 00:48:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:37.108 00:48:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.108 00:48:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:37.108 00:48:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.108 00:48:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:37.108 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:37.108 00:48:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.108 00:48:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:37.108 00:48:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:37.108 00:48:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:37.108 00:48:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.108 00:48:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.108 00:48:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.108 00:48:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:37.108 00:48:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.108 00:48:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.108 00:48:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:37.108 00:48:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.108 00:48:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.108 00:48:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:37.108 00:48:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:37.108 00:48:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.108 00:48:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.108 00:48:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.108 00:48:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.108 00:48:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:37.108 00:48:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.108 00:48:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.108 00:48:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.108 00:48:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:37.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:08:37.108 00:08:37.108 --- 10.0.0.2 ping statistics --- 00:08:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.108 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:08:37.108 00:48:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:08:37.108 00:08:37.108 --- 10.0.0.1 ping statistics --- 00:08:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.108 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:37.108 00:48:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.108 00:48:20 -- nvmf/common.sh@410 -- # return 0 00:08:37.108 00:48:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:37.108 00:48:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.108 00:48:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:37.108 00:48:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.108 00:48:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:37.108 00:48:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:37.108 00:48:20 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:37.108 00:48:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:37.108 00:48:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.108 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 00:48:21 -- nvmf/common.sh@469 -- # nvmfpid=3306020 00:08:37.108 00:48:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.108 00:48:21 -- nvmf/common.sh@470 -- # waitforlisten 3306020 00:08:37.108 00:48:21 -- common/autotest_common.sh@819 -- # '[' -z 3306020 ']' 00:08:37.108 00:48:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.108 00:48:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:37.108 00:48:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.108 00:48:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:37.108 00:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 [2024-07-23 00:48:21.045216] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:37.108 [2024-07-23 00:48:21.045306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.108 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.108 [2024-07-23 00:48:21.114540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.108 [2024-07-23 00:48:21.204934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:37.108 [2024-07-23 00:48:21.205110] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.108 [2024-07-23 00:48:21.205131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.108 [2024-07-23 00:48:21.205147] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
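For reference, the test-network plumbing that nvmf_tcp_init performed just above reduces to the sketch below; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are those the trace reports for this host, and would differ on another rig:

# Condensed from the nvmf_tcp_init trace above -- a sketch, not the function itself.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                           # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator
# nvmf_tgt is then launched inside the namespace, as logged above:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The same sequence repeats further down when referrals.sh re-initializes the namespace for the next test.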
00:08:37.108 [2024-07-23 00:48:21.205238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.108 [2024-07-23 00:48:21.205292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.108 [2024-07-23 00:48:21.205345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.108 [2024-07-23 00:48:21.205347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.041 00:48:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:38.041 00:48:21 -- common/autotest_common.sh@852 -- # return 0 00:08:38.041 00:48:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:38.041 00:48:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:38.041 00:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:38.041 00:48:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.041 00:48:22 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.041 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.041 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.041 [2024-07-23 00:48:22.015197] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.041 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.041 00:48:22 -- target/discovery.sh@26 -- # seq 1 4 00:08:38.042 00:48:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.042 00:48:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 Null1 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 [2024-07-23 00:48:22.055444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.042 00:48:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 Null2 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:38.042 00:48:22 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.042 00:48:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 Null3 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.042 00:48:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 Null4 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:38.042 
00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:38.042 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.042 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.042 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.042 00:48:22 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:38.301 00:08:38.301 Discovery Log Number of Records 6, Generation counter 6 00:08:38.301 =====Discovery Log Entry 0====== 00:08:38.301 trtype: tcp 00:08:38.301 adrfam: ipv4 00:08:38.301 subtype: current discovery subsystem 00:08:38.301 treq: not required 00:08:38.301 portid: 0 00:08:38.301 trsvcid: 4420 00:08:38.301 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.301 traddr: 10.0.0.2 00:08:38.301 eflags: explicit discovery connections, duplicate discovery information 00:08:38.301 sectype: none 00:08:38.301 =====Discovery Log Entry 1====== 00:08:38.301 trtype: tcp 00:08:38.301 adrfam: ipv4 00:08:38.301 subtype: nvme subsystem 00:08:38.301 treq: not required 00:08:38.301 portid: 0 00:08:38.301 trsvcid: 4420 00:08:38.301 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:38.301 traddr: 10.0.0.2 00:08:38.301 eflags: none 00:08:38.301 sectype: none 00:08:38.301 =====Discovery Log Entry 2====== 00:08:38.301 trtype: tcp 00:08:38.301 adrfam: ipv4 00:08:38.301 subtype: nvme subsystem 00:08:38.301 treq: not required 00:08:38.301 portid: 0 00:08:38.301 trsvcid: 4420 00:08:38.301 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:38.301 traddr: 10.0.0.2 00:08:38.301 eflags: none 00:08:38.301 sectype: none 00:08:38.301 =====Discovery Log Entry 3====== 00:08:38.301 trtype: tcp 00:08:38.301 adrfam: ipv4 00:08:38.301 subtype: nvme subsystem 00:08:38.301 treq: not required 00:08:38.301 portid: 0 00:08:38.302 trsvcid: 4420 00:08:38.302 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:38.302 traddr: 10.0.0.2 00:08:38.302 eflags: none 00:08:38.302 sectype: none 00:08:38.302 =====Discovery Log Entry 4====== 00:08:38.302 trtype: tcp 00:08:38.302 adrfam: ipv4 00:08:38.302 subtype: nvme subsystem 00:08:38.302 treq: not required 00:08:38.302 portid: 0 00:08:38.302 trsvcid: 4420 00:08:38.302 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:38.302 traddr: 10.0.0.2 00:08:38.302 eflags: none 00:08:38.302 sectype: none 00:08:38.302 =====Discovery Log Entry 5====== 00:08:38.302 trtype: tcp 00:08:38.302 adrfam: ipv4 00:08:38.302 subtype: discovery subsystem referral 00:08:38.302 treq: not required 00:08:38.302 portid: 0 00:08:38.302 trsvcid: 4430 00:08:38.302 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.302 traddr: 10.0.0.2 00:08:38.302 eflags: none 00:08:38.302 sectype: none 00:08:38.302 00:48:22 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:38.302 Perform nvmf subsystem discovery via RPC 00:08:38.302 00:48:22 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 [2024-07-23 00:48:22.260001] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:38.302 [ 00:08:38.302 { 00:08:38.302 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:38.302 "subtype": "Discovery", 00:08:38.302 "listen_addresses": [ 00:08:38.302 { 00:08:38.302 "transport": "TCP", 00:08:38.302 "trtype": "TCP", 00:08:38.302 "adrfam": "IPv4", 00:08:38.302 "traddr": "10.0.0.2", 00:08:38.302 "trsvcid": "4420" 00:08:38.302 } 00:08:38.302 ], 00:08:38.302 "allow_any_host": true, 00:08:38.302 "hosts": [] 00:08:38.302 }, 00:08:38.302 { 00:08:38.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.302 "subtype": "NVMe", 00:08:38.302 "listen_addresses": [ 00:08:38.302 { 00:08:38.302 "transport": "TCP", 00:08:38.302 "trtype": "TCP", 00:08:38.302 "adrfam": "IPv4", 00:08:38.302 "traddr": "10.0.0.2", 00:08:38.302 "trsvcid": "4420" 00:08:38.302 } 00:08:38.302 ], 00:08:38.302 "allow_any_host": true, 00:08:38.302 "hosts": [], 00:08:38.302 "serial_number": "SPDK00000000000001", 00:08:38.302 "model_number": "SPDK bdev Controller", 00:08:38.302 "max_namespaces": 32, 00:08:38.302 "min_cntlid": 1, 00:08:38.302 "max_cntlid": 65519, 00:08:38.302 "namespaces": [ 00:08:38.302 { 00:08:38.302 "nsid": 1, 00:08:38.302 "bdev_name": "Null1", 00:08:38.302 "name": "Null1", 00:08:38.302 "nguid": "1CB9C56615094F13BD3F86639E8D66DD", 00:08:38.302 "uuid": "1cb9c566-1509-4f13-bd3f-86639e8d66dd" 00:08:38.302 } 00:08:38.302 ] 00:08:38.302 }, 00:08:38.302 { 00:08:38.302 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.302 "subtype": "NVMe", 00:08:38.302 "listen_addresses": [ 00:08:38.302 { 00:08:38.302 "transport": "TCP", 00:08:38.302 "trtype": "TCP", 00:08:38.302 "adrfam": "IPv4", 00:08:38.302 "traddr": "10.0.0.2", 00:08:38.302 "trsvcid": "4420" 00:08:38.302 } 00:08:38.302 ], 00:08:38.302 "allow_any_host": true, 00:08:38.302 "hosts": [], 00:08:38.302 "serial_number": "SPDK00000000000002", 00:08:38.302 "model_number": "SPDK bdev Controller", 00:08:38.302 "max_namespaces": 32, 00:08:38.302 "min_cntlid": 1, 00:08:38.302 "max_cntlid": 65519, 00:08:38.302 "namespaces": [ 00:08:38.302 { 00:08:38.302 "nsid": 1, 00:08:38.302 "bdev_name": "Null2", 00:08:38.302 "name": "Null2", 00:08:38.302 "nguid": "1713602AA7C348CEAA3E47433019D71D", 00:08:38.302 "uuid": "1713602a-a7c3-48ce-aa3e-47433019d71d" 00:08:38.302 } 00:08:38.302 ] 00:08:38.302 }, 00:08:38.302 { 00:08:38.302 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:38.302 "subtype": "NVMe", 00:08:38.302 "listen_addresses": [ 00:08:38.302 { 00:08:38.302 "transport": "TCP", 00:08:38.302 "trtype": "TCP", 00:08:38.302 "adrfam": "IPv4", 00:08:38.302 "traddr": "10.0.0.2", 00:08:38.302 "trsvcid": "4420" 00:08:38.302 } 00:08:38.302 ], 00:08:38.302 "allow_any_host": true, 00:08:38.302 "hosts": [], 00:08:38.302 "serial_number": "SPDK00000000000003", 00:08:38.302 "model_number": "SPDK bdev Controller", 00:08:38.302 "max_namespaces": 32, 00:08:38.302 "min_cntlid": 1, 00:08:38.302 "max_cntlid": 65519, 00:08:38.302 "namespaces": [ 00:08:38.302 { 00:08:38.302 "nsid": 1, 00:08:38.302 "bdev_name": "Null3", 00:08:38.302 "name": "Null3", 00:08:38.302 "nguid": "1E6FCBEB6EC841C3A735A6E4CBB15394", 00:08:38.302 "uuid": "1e6fcbeb-6ec8-41c3-a735-a6e4cbb15394" 00:08:38.302 } 00:08:38.302 ] 
00:08:38.302 }, 00:08:38.302 { 00:08:38.302 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:38.302 "subtype": "NVMe", 00:08:38.302 "listen_addresses": [ 00:08:38.302 { 00:08:38.302 "transport": "TCP", 00:08:38.302 "trtype": "TCP", 00:08:38.302 "adrfam": "IPv4", 00:08:38.302 "traddr": "10.0.0.2", 00:08:38.302 "trsvcid": "4420" 00:08:38.302 } 00:08:38.302 ], 00:08:38.302 "allow_any_host": true, 00:08:38.302 "hosts": [], 00:08:38.302 "serial_number": "SPDK00000000000004", 00:08:38.302 "model_number": "SPDK bdev Controller", 00:08:38.302 "max_namespaces": 32, 00:08:38.302 "min_cntlid": 1, 00:08:38.302 "max_cntlid": 65519, 00:08:38.302 "namespaces": [ 00:08:38.302 { 00:08:38.302 "nsid": 1, 00:08:38.302 "bdev_name": "Null4", 00:08:38.302 "name": "Null4", 00:08:38.302 "nguid": "B34F8F8C1E094CD78847CE463F6BF905", 00:08:38.302 "uuid": "b34f8f8c-1e09-4cd7-8847-ce463f6bf905" 00:08:38.302 } 00:08:38.302 ] 00:08:38.302 } 00:08:38.302 ] 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@42 -- # seq 1 4 00:08:38.302 00:48:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.302 00:48:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.302 00:48:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.302 00:48:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.302 00:48:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:38.302 00:48:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:38.302 00:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.302 00:48:22 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:38.302 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 00:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.302 00:48:22 -- target/discovery.sh@49 -- # check_bdevs= 00:08:38.302 00:48:22 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:38.302 00:48:22 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:38.303 00:48:22 -- target/discovery.sh@57 -- # nvmftestfini 00:08:38.303 00:48:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:38.303 00:48:22 -- nvmf/common.sh@116 -- # sync 00:08:38.303 00:48:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:38.303 00:48:22 -- nvmf/common.sh@119 -- # set +e 00:08:38.303 00:48:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:38.303 00:48:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:38.303 rmmod nvme_tcp 00:08:38.303 rmmod nvme_fabrics 00:08:38.303 rmmod nvme_keyring 00:08:38.303 00:48:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:38.303 00:48:22 -- nvmf/common.sh@123 -- # set -e 00:08:38.303 00:48:22 -- nvmf/common.sh@124 -- # return 0 00:08:38.303 00:48:22 -- nvmf/common.sh@477 -- # '[' -n 3306020 ']' 00:08:38.303 00:48:22 -- nvmf/common.sh@478 -- # killprocess 3306020 00:08:38.303 00:48:22 -- common/autotest_common.sh@926 -- # '[' -z 3306020 ']' 00:08:38.303 00:48:22 -- common/autotest_common.sh@930 -- # kill -0 3306020 00:08:38.303 00:48:22 -- common/autotest_common.sh@931 -- # uname 00:08:38.303 00:48:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:38.303 00:48:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3306020 00:08:38.303 00:48:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:38.303 00:48:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:38.303 00:48:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3306020' 00:08:38.303 killing process with pid 3306020 00:08:38.303 00:48:22 -- common/autotest_common.sh@945 -- # kill 3306020 00:08:38.303 [2024-07-23 00:48:22.468161] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:38.303 00:48:22 -- common/autotest_common.sh@950 -- # wait 3306020 00:08:38.562 00:48:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:38.562 00:48:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:38.562 00:48:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:38.562 00:48:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.562 00:48:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:38.562 00:48:22 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.562 00:48:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.562 00:48:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.104 00:48:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:41.104 00:08:41.104 real 0m6.054s 00:08:41.104 user 0m6.918s 00:08:41.104 sys 0m1.911s 00:08:41.104 00:48:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.104 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 ************************************ 00:08:41.104 END TEST nvmf_discovery 00:08:41.104 ************************************ 00:08:41.104 00:48:24 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:41.104 00:48:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:41.104 00:48:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.104 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 ************************************ 00:08:41.104 START TEST nvmf_referrals 00:08:41.104 ************************************ 00:08:41.104 00:48:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:41.104 * Looking for test storage... 00:08:41.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.104 00:48:24 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.104 00:48:24 -- nvmf/common.sh@7 -- # uname -s 00:08:41.104 00:48:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.104 00:48:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.104 00:48:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.104 00:48:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.104 00:48:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.104 00:48:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.104 00:48:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.104 00:48:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.104 00:48:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.104 00:48:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.104 00:48:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:41.104 00:48:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:41.104 00:48:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.104 00:48:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.104 00:48:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.104 00:48:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.104 00:48:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.104 00:48:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.104 00:48:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.104 00:48:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.104 00:48:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.104 00:48:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.104 00:48:24 -- paths/export.sh@5 -- # export PATH 00:08:41.104 00:48:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.104 00:48:24 -- nvmf/common.sh@46 -- # : 0 00:08:41.104 00:48:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.104 00:48:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.104 00:48:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.104 00:48:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.104 00:48:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.104 00:48:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:41.104 00:48:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.104 00:48:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.104 00:48:24 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:41.104 00:48:24 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:41.104 00:48:24 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:41.104 00:48:24 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:41.104 00:48:24 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:41.104 00:48:24 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:41.104 00:48:24 -- target/referrals.sh@37 -- # nvmftestinit 00:08:41.104 00:48:24 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:41.104 00:48:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.104 00:48:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.104 00:48:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.104 00:48:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.104 00:48:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.104 00:48:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.104 00:48:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.104 00:48:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:41.104 00:48:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:41.104 00:48:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:41.104 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:43.013 00:48:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:43.013 00:48:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:43.013 00:48:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:43.013 00:48:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:43.013 00:48:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:43.013 00:48:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:43.013 00:48:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:43.013 00:48:26 -- nvmf/common.sh@294 -- # net_devs=() 00:08:43.013 00:48:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:43.013 00:48:26 -- nvmf/common.sh@295 -- # e810=() 00:08:43.013 00:48:26 -- nvmf/common.sh@295 -- # local -ga e810 00:08:43.013 00:48:26 -- nvmf/common.sh@296 -- # x722=() 00:08:43.013 00:48:26 -- nvmf/common.sh@296 -- # local -ga x722 00:08:43.013 00:48:26 -- nvmf/common.sh@297 -- # mlx=() 00:08:43.013 00:48:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:43.013 00:48:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.013 00:48:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:43.013 00:48:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:43.013 00:48:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:43.013 00:48:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:43.013 00:48:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:43.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:43.013 00:48:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:43.013 00:48:26 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:43.013 00:48:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:43.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:43.013 00:48:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:43.013 00:48:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:43.013 00:48:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.013 00:48:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:43.013 00:48:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.013 00:48:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:43.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:43.013 00:48:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.013 00:48:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:43.013 00:48:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.013 00:48:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:43.013 00:48:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.013 00:48:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:43.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:43.013 00:48:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.013 00:48:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:43.013 00:48:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:43.013 00:48:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:43.013 00:48:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:43.013 00:48:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.013 00:48:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.013 00:48:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.013 00:48:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:43.013 00:48:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.013 00:48:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.013 00:48:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:43.013 00:48:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.013 00:48:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.013 00:48:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:43.013 00:48:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:43.013 00:48:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.013 00:48:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:43.013 00:48:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.013 00:48:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.013 00:48:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:43.013 00:48:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.013 00:48:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.013 00:48:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.013 00:48:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:43.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:08:43.013 00:08:43.013 --- 10.0.0.2 ping statistics --- 00:08:43.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.013 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:43.013 00:48:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:08:43.013 00:08:43.013 --- 10.0.0.1 ping statistics --- 00:08:43.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.014 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:43.014 00:48:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.014 00:48:26 -- nvmf/common.sh@410 -- # return 0 00:08:43.014 00:48:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:43.014 00:48:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.014 00:48:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:43.014 00:48:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:43.014 00:48:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.014 00:48:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:43.014 00:48:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.014 00:48:27 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:43.014 00:48:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.014 00:48:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:43.014 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:08:43.014 00:48:27 -- nvmf/common.sh@469 -- # nvmfpid=3308143 00:08:43.014 00:48:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.014 00:48:27 -- nvmf/common.sh@470 -- # waitforlisten 3308143 00:08:43.014 00:48:27 -- common/autotest_common.sh@819 -- # '[' -z 3308143 ']' 00:08:43.014 00:48:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.014 00:48:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:43.014 00:48:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.014 00:48:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:43.014 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:08:43.014 [2024-07-23 00:48:27.063709] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:43.014 [2024-07-23 00:48:27.063787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.014 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.014 [2024-07-23 00:48:27.128493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.273 [2024-07-23 00:48:27.218059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.273 [2024-07-23 00:48:27.218210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.273 [2024-07-23 00:48:27.218227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.273 [2024-07-23 00:48:27.218241] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.273 [2024-07-23 00:48:27.218309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.273 [2024-07-23 00:48:27.218401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.273 [2024-07-23 00:48:27.218432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.273 [2024-07-23 00:48:27.218433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.257 00:48:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:44.257 00:48:28 -- common/autotest_common.sh@852 -- # return 0 00:08:44.257 00:48:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:44.257 00:48:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 00:48:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.257 00:48:28 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.257 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 [2024-07-23 00:48:28.079395] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.257 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.257 00:48:28 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:44.257 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 [2024-07-23 00:48:28.091608] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:44.257 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.257 00:48:28 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:44.257 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.257 00:48:28 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:44.257 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.257 00:48:28 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:44.257 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.257 00:48:28 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.257 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.257 00:48:28 -- target/referrals.sh@48 -- # jq length 00:08:44.257 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.257 00:48:28 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:44.257 00:48:28 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:44.257 00:48:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.257 00:48:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.258 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.258 00:48:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.258 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 00:48:28 -- target/referrals.sh@21 -- # sort 00:08:44.258 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.258 00:48:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:44.258 00:48:28 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:44.258 00:48:28 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:44.258 00:48:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.258 00:48:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.258 00:48:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.258 00:48:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.258 00:48:28 -- target/referrals.sh@26 -- # sort 00:08:44.258 00:48:28 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:44.258 00:48:28 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:44.258 00:48:28 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:44.258 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.258 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.258 00:48:28 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:44.258 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.258 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.258 00:48:28 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:44.258 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.258 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.258 00:48:28 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.258 00:48:28 -- target/referrals.sh@56 -- # jq length 00:08:44.258 00:48:28 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.258 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.517 00:48:28 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:44.517 00:48:28 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:44.517 00:48:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.517 00:48:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.517 00:48:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.517 00:48:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.517 00:48:28 -- target/referrals.sh@26 -- # sort 00:08:44.517 00:48:28 -- target/referrals.sh@26 -- # echo 00:08:44.517 00:48:28 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:44.517 00:48:28 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:44.517 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.517 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.517 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.517 00:48:28 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:44.517 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.517 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.517 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.517 00:48:28 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:44.517 00:48:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.517 00:48:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.517 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.517 00:48:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.517 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.517 00:48:28 -- target/referrals.sh@21 -- # sort 00:08:44.517 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.518 00:48:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:44.518 00:48:28 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:44.518 00:48:28 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:44.518 00:48:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.518 00:48:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.518 00:48:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.518 00:48:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.518 00:48:28 -- target/referrals.sh@26 -- # sort 00:08:44.777 00:48:28 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:44.777 00:48:28 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:44.777 00:48:28 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:44.777 00:48:28 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:44.777 00:48:28 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:44.777 00:48:28 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.777 00:48:28 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:44.777 00:48:28 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:44.777 00:48:28 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:44.777 00:48:28 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:44.777 00:48:28 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:44.777 00:48:28 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.777 00:48:28 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:44.777 00:48:28 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:44.777 00:48:28 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:44.777 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.777 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.777 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.777 00:48:28 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:44.777 00:48:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.777 00:48:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.777 00:48:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.777 00:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.777 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:44.777 00:48:28 -- target/referrals.sh@21 -- # sort 00:08:44.777 00:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.777 00:48:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:44.777 00:48:28 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:44.777 00:48:28 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:44.777 00:48:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.777 00:48:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.777 00:48:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.777 00:48:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.777 00:48:28 -- target/referrals.sh@26 -- # sort 00:08:45.037 00:48:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:45.037 00:48:29 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:45.037 00:48:29 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:45.037 00:48:29 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:45.037 00:48:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:45.037 00:48:29 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.037 00:48:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:45.297 00:48:29 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:45.297 00:48:29 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:45.297 00:48:29 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:45.297 00:48:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:45.297 00:48:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.297 00:48:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:45.297 00:48:29 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:45.297 00:48:29 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:45.297 00:48:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.297 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:08:45.297 00:48:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.297 00:48:29 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:45.297 00:48:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.297 00:48:29 -- target/referrals.sh@82 -- # jq length 00:08:45.297 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:08:45.297 00:48:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.297 00:48:29 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:45.297 00:48:29 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:45.297 00:48:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:45.297 00:48:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:45.297 00:48:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.297 00:48:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:45.297 00:48:29 -- target/referrals.sh@26 -- # sort 00:08:45.557 00:48:29 -- target/referrals.sh@26 -- # echo 00:08:45.557 00:48:29 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:45.557 00:48:29 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:45.557 00:48:29 -- target/referrals.sh@86 -- # nvmftestfini 00:08:45.557 00:48:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:45.557 00:48:29 -- nvmf/common.sh@116 -- # sync 00:08:45.557 00:48:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:45.557 00:48:29 -- nvmf/common.sh@119 -- # set +e 00:08:45.557 00:48:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:45.557 00:48:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:45.557 rmmod nvme_tcp 00:08:45.557 rmmod nvme_fabrics 00:08:45.557 rmmod nvme_keyring 00:08:45.557 00:48:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.557 00:48:29 -- nvmf/common.sh@123 -- # set -e 00:08:45.557 00:48:29 -- nvmf/common.sh@124 -- # return 0 00:08:45.557 00:48:29 -- nvmf/common.sh@477 
-- # '[' -n 3308143 ']' 00:08:45.557 00:48:29 -- nvmf/common.sh@478 -- # killprocess 3308143 00:08:45.557 00:48:29 -- common/autotest_common.sh@926 -- # '[' -z 3308143 ']' 00:08:45.557 00:48:29 -- common/autotest_common.sh@930 -- # kill -0 3308143 00:08:45.557 00:48:29 -- common/autotest_common.sh@931 -- # uname 00:08:45.557 00:48:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:45.557 00:48:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3308143 00:08:45.557 00:48:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:45.557 00:48:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:45.557 00:48:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3308143' 00:08:45.557 killing process with pid 3308143 00:08:45.557 00:48:29 -- common/autotest_common.sh@945 -- # kill 3308143 00:08:45.557 00:48:29 -- common/autotest_common.sh@950 -- # wait 3308143 00:08:45.817 00:48:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:45.817 00:48:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:45.817 00:48:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:45.817 00:48:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.817 00:48:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:45.817 00:48:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.817 00:48:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.817 00:48:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.723 00:48:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:47.723 00:08:47.723 real 0m7.136s 00:08:47.723 user 0m12.053s 00:08:47.723 sys 0m2.189s 00:08:47.723 00:48:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.723 00:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.723 ************************************ 00:08:47.723 END TEST nvmf_referrals 00:08:47.723 ************************************ 00:08:47.982 00:48:31 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:47.982 00:48:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:47.982 00:48:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.982 00:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.982 ************************************ 00:08:47.982 START TEST nvmf_connect_disconnect 00:08:47.982 ************************************ 00:08:47.982 00:48:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:47.982 * Looking for test storage... 
00:08:47.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.982 00:48:31 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.982 00:48:31 -- nvmf/common.sh@7 -- # uname -s 00:08:47.982 00:48:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.982 00:48:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.982 00:48:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.982 00:48:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.982 00:48:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.982 00:48:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.982 00:48:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.982 00:48:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.982 00:48:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.982 00:48:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.982 00:48:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.982 00:48:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.982 00:48:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.982 00:48:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.982 00:48:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.982 00:48:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.982 00:48:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.982 00:48:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.982 00:48:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.982 00:48:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.982 00:48:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.982 00:48:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.982 00:48:32 -- paths/export.sh@5 -- # export PATH 00:08:47.982 00:48:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.982 00:48:32 -- nvmf/common.sh@46 -- # : 0 00:08:47.982 00:48:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:47.982 00:48:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:47.982 00:48:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:47.982 00:48:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.982 00:48:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.982 00:48:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:47.982 00:48:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:47.982 00:48:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:47.982 00:48:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.982 00:48:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.982 00:48:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:47.982 00:48:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:47.982 00:48:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.982 00:48:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:47.982 00:48:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:47.982 00:48:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:47.982 00:48:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.982 00:48:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.982 00:48:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.982 00:48:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:47.982 00:48:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:47.982 00:48:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:47.982 00:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:49.881 00:48:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:49.881 00:48:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:49.881 00:48:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:49.881 00:48:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:49.881 00:48:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:49.881 00:48:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:49.881 00:48:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:49.881 00:48:34 -- nvmf/common.sh@294 -- # net_devs=() 00:08:49.881 00:48:34 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:49.881 00:48:34 -- nvmf/common.sh@295 -- # e810=() 00:08:49.881 00:48:34 -- nvmf/common.sh@295 -- # local -ga e810 00:08:49.881 00:48:34 -- nvmf/common.sh@296 -- # x722=() 00:08:49.881 00:48:34 -- nvmf/common.sh@296 -- # local -ga x722 00:08:49.881 00:48:34 -- nvmf/common.sh@297 -- # mlx=() 00:08:49.881 00:48:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:49.881 00:48:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.881 00:48:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.881 00:48:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.881 00:48:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.881 00:48:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.881 00:48:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.882 00:48:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.882 00:48:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.882 00:48:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.882 00:48:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.882 00:48:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.882 00:48:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:49.882 00:48:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:49.882 00:48:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:49.882 00:48:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.882 00:48:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.882 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.882 00:48:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.882 00:48:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.882 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.882 00:48:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:49.882 00:48:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.882 00:48:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.882 00:48:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.882 00:48:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.882 00:48:34 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:08:49.882 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.882 00:48:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.882 00:48:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.882 00:48:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.882 00:48:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.882 00:48:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.882 00:48:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.882 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.882 00:48:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.882 00:48:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:49.882 00:48:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:49.882 00:48:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:49.882 00:48:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:49.882 00:48:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.882 00:48:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.882 00:48:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.882 00:48:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:49.882 00:48:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.882 00:48:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.882 00:48:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:49.882 00:48:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.882 00:48:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.882 00:48:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:49.882 00:48:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:49.882 00:48:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.882 00:48:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.140 00:48:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.140 00:48:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.140 00:48:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:50.140 00:48:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.140 00:48:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.140 00:48:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.140 00:48:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:50.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:08:50.140 00:08:50.140 --- 10.0.0.2 ping statistics --- 00:08:50.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.140 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:50.140 00:48:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:08:50.140 00:08:50.140 --- 10.0.0.1 ping statistics --- 00:08:50.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.140 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:50.140 00:48:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.140 00:48:34 -- nvmf/common.sh@410 -- # return 0 00:08:50.140 00:48:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:50.140 00:48:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.140 00:48:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:50.140 00:48:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:50.140 00:48:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.140 00:48:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:50.140 00:48:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:50.140 00:48:34 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:50.140 00:48:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:50.140 00:48:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:50.140 00:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:50.140 00:48:34 -- nvmf/common.sh@469 -- # nvmfpid=3310466 00:08:50.140 00:48:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.140 00:48:34 -- nvmf/common.sh@470 -- # waitforlisten 3310466 00:08:50.140 00:48:34 -- common/autotest_common.sh@819 -- # '[' -z 3310466 ']' 00:08:50.140 00:48:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.140 00:48:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.140 00:48:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.140 00:48:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.140 00:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:50.140 [2024-07-23 00:48:34.266441] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:50.140 [2024-07-23 00:48:34.266517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.140 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.140 [2024-07-23 00:48:34.335341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.400 [2024-07-23 00:48:34.428814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.400 [2024-07-23 00:48:34.428983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.400 [2024-07-23 00:48:34.429004] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.400 [2024-07-23 00:48:34.429019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:50.400 [2024-07-23 00:48:34.429072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.400 [2024-07-23 00:48:34.429134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.400 [2024-07-23 00:48:34.429185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.400 [2024-07-23 00:48:34.429188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.337 00:48:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.337 00:48:35 -- common/autotest_common.sh@852 -- # return 0 00:08:51.337 00:48:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:51.337 00:48:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:51.337 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.337 00:48:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:51.337 00:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.337 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.337 [2024-07-23 00:48:35.213113] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.337 00:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:51.337 00:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.337 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.337 00:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:51.337 00:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.337 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.337 00:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.337 00:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.337 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.337 00:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.337 00:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.337 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.337 [2024-07-23 00:48:35.266068] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.337 00:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:51.337 00:48:35 -- target/connect_disconnect.sh@34 -- # set +x 00:08:53.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:02.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.386 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:55.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.419 00:52:25 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
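The run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the 100-iteration connect/disconnect loop driven by target/connect_disconnect.sh. A minimal sketch of one iteration, reconstructed only from the parameters visible in this log (listener 10.0.0.2 port 4420, subsystem nqn.2016-06.io.spdk:cnode1, NVME_CONNECT='nvme connect -i 8', num_iterations=100); the script's exact wait/verify steps are not captured here, and the host NQN/ID flags are omitted, so treat this as an approximation rather than the script itself:

    # Sketch, assuming nvme-cli and the target set up earlier in this log is still listening.
    for i in $(seq 1 100); do
        # attach an initiator controller with 8 I/O queues over TCP
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        # ... the test presumably waits for the controller/namespace to appear here ...
        # detach again; this is what prints "disconnected 1 controller(s)"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done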
00:12:41.419 00:52:25 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:41.419 00:52:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:41.419 00:52:25 -- nvmf/common.sh@116 -- # sync 00:12:41.419 00:52:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:41.419 00:52:25 -- nvmf/common.sh@119 -- # set +e 00:12:41.419 00:52:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:41.419 00:52:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.419 rmmod nvme_tcp 00:12:41.419 rmmod nvme_fabrics 00:12:41.419 rmmod nvme_keyring 00:12:41.419 00:52:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.419 00:52:25 -- nvmf/common.sh@123 -- # set -e 00:12:41.419 00:52:25 -- nvmf/common.sh@124 -- # return 0 00:12:41.419 00:52:25 -- nvmf/common.sh@477 -- # '[' -n 3310466 ']' 00:12:41.419 00:52:25 -- nvmf/common.sh@478 -- # killprocess 3310466 00:12:41.419 00:52:25 -- common/autotest_common.sh@926 -- # '[' -z 3310466 ']' 00:12:41.419 00:52:25 -- common/autotest_common.sh@930 -- # kill -0 3310466 00:12:41.419 00:52:25 -- common/autotest_common.sh@931 -- # uname 00:12:41.419 00:52:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:41.419 00:52:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3310466 00:12:41.419 00:52:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:41.419 00:52:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:41.419 00:52:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3310466' 00:12:41.419 killing process with pid 3310466 00:12:41.419 00:52:25 -- common/autotest_common.sh@945 -- # kill 3310466 00:12:41.419 00:52:25 -- common/autotest_common.sh@950 -- # wait 3310466 00:12:41.419 00:52:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.419 00:52:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.419 00:52:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.419 00:52:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.419 00:52:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.419 00:52:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.419 00:52:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.419 00:52:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.323 00:52:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:43.323 00:12:43.323 real 3m55.523s 00:12:43.323 user 14m56.800s 00:12:43.323 sys 0m34.842s 00:12:43.323 00:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.323 00:52:27 -- common/autotest_common.sh@10 -- # set +x 00:12:43.323 ************************************ 00:12:43.323 END TEST nvmf_connect_disconnect 00:12:43.323 ************************************ 00:12:43.323 00:52:27 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:43.323 00:52:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:43.323 00:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.323 00:52:27 -- common/autotest_common.sh@10 -- # set +x 00:12:43.323 ************************************ 00:12:43.323 START TEST nvmf_multitarget 00:12:43.323 ************************************ 00:12:43.323 00:52:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:43.582 * Looking for test storage... 
00:12:43.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.582 00:52:27 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.582 00:52:27 -- nvmf/common.sh@7 -- # uname -s 00:12:43.582 00:52:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.582 00:52:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.582 00:52:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.582 00:52:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.582 00:52:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.582 00:52:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.582 00:52:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.582 00:52:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.582 00:52:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.582 00:52:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.582 00:52:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.582 00:52:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.582 00:52:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.582 00:52:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.582 00:52:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.582 00:52:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.582 00:52:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.582 00:52:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.582 00:52:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.582 00:52:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.582 00:52:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.582 00:52:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.582 00:52:27 -- paths/export.sh@5 -- # export PATH 00:12:43.582 00:52:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.582 00:52:27 -- nvmf/common.sh@46 -- # : 0 00:12:43.582 00:52:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:43.582 00:52:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:43.582 00:52:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:43.582 00:52:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.582 00:52:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.582 00:52:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:43.582 00:52:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:43.582 00:52:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:43.582 00:52:27 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:43.582 00:52:27 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:43.582 00:52:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:43.582 00:52:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.582 00:52:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:43.582 00:52:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:43.582 00:52:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:43.582 00:52:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.582 00:52:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.582 00:52:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.582 00:52:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:43.582 00:52:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:43.582 00:52:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:43.582 00:52:27 -- common/autotest_common.sh@10 -- # set +x 00:12:45.484 00:52:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:45.484 00:52:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:45.484 00:52:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:45.484 00:52:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:45.484 00:52:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:45.484 00:52:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:45.484 00:52:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:45.484 00:52:29 -- nvmf/common.sh@294 -- # net_devs=() 00:12:45.484 00:52:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:45.484 00:52:29 -- 
nvmf/common.sh@295 -- # e810=() 00:12:45.484 00:52:29 -- nvmf/common.sh@295 -- # local -ga e810 00:12:45.484 00:52:29 -- nvmf/common.sh@296 -- # x722=() 00:12:45.484 00:52:29 -- nvmf/common.sh@296 -- # local -ga x722 00:12:45.484 00:52:29 -- nvmf/common.sh@297 -- # mlx=() 00:12:45.484 00:52:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:45.484 00:52:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.484 00:52:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:45.484 00:52:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:45.484 00:52:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:45.484 00:52:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:45.484 00:52:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:45.484 00:52:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:45.484 00:52:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:45.484 00:52:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.484 00:52:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:45.485 00:52:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.485 00:52:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:45.485 00:52:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:45.485 00:52:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.485 00:52:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:45.485 00:52:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.485 00:52:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:12:45.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.485 00:52:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.485 00:52:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:45.485 00:52:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.485 00:52:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:45.485 00:52:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.485 00:52:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.485 00:52:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.485 00:52:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:45.485 00:52:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:45.485 00:52:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:45.485 00:52:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.485 00:52:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.485 00:52:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.485 00:52:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:45.485 00:52:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.485 00:52:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.485 00:52:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:45.485 00:52:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.485 00:52:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.485 00:52:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:45.485 00:52:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:45.485 00:52:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.485 00:52:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.485 00:52:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.485 00:52:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.485 00:52:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:45.485 00:52:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.485 00:52:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.485 00:52:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.485 00:52:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:45.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:12:45.485 00:12:45.485 --- 10.0.0.2 ping statistics --- 00:12:45.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.485 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:45.485 00:52:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:12:45.485 00:12:45.485 --- 10.0.0.1 ping statistics --- 00:12:45.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.485 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:12:45.485 00:52:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.485 00:52:29 -- nvmf/common.sh@410 -- # return 0 00:12:45.485 00:52:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:45.485 00:52:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.485 00:52:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:45.485 00:52:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.485 00:52:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:45.485 00:52:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:45.485 00:52:29 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:45.485 00:52:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:45.485 00:52:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:45.485 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:12:45.485 00:52:29 -- nvmf/common.sh@469 -- # nvmfpid=3342364 00:12:45.485 00:52:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.485 00:52:29 -- nvmf/common.sh@470 -- # waitforlisten 3342364 00:12:45.485 00:52:29 -- common/autotest_common.sh@819 -- # '[' -z 3342364 ']' 00:12:45.485 00:52:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.485 00:52:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.485 00:52:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.485 00:52:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.485 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:12:45.745 [2024-07-23 00:52:29.723045] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:45.745 [2024-07-23 00:52:29.723125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.745 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.745 [2024-07-23 00:52:29.787752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.745 [2024-07-23 00:52:29.875686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:45.745 [2024-07-23 00:52:29.875833] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.745 [2024-07-23 00:52:29.875850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.745 [2024-07-23 00:52:29.875864] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
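The nvmf_tcp_init block above moves one ice port into a private network namespace for the target and leaves the other in the default namespace as the initiator side. A minimal by-hand reconstruction of that bring-up, assuming the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing used on this rig (an approximation, not nvmf/common.sh itself):

    ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

Both pings answering, as they do above, is what lets the init step return 0 and the test move on.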
00:12:45.745 [2024-07-23 00:52:29.875931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.745 [2024-07-23 00:52:29.875997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.745 [2024-07-23 00:52:29.876061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.745 [2024-07-23 00:52:29.876063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.680 00:52:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:46.680 00:52:30 -- common/autotest_common.sh@852 -- # return 0 00:12:46.680 00:52:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:46.681 00:52:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:46.681 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:12:46.681 00:52:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.681 00:52:30 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:46.681 00:52:30 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.681 00:52:30 -- target/multitarget.sh@21 -- # jq length 00:12:46.681 00:52:30 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:46.681 00:52:30 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:46.939 "nvmf_tgt_1" 00:12:46.939 00:52:30 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:46.939 "nvmf_tgt_2" 00:12:46.939 00:52:31 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.939 00:52:31 -- target/multitarget.sh@28 -- # jq length 00:12:46.939 00:52:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:46.939 00:52:31 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:47.198 true 00:12:47.198 00:52:31 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:47.198 true 00:12:47.198 00:52:31 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:47.198 00:52:31 -- target/multitarget.sh@35 -- # jq length 00:12:47.456 00:52:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:47.456 00:52:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:47.456 00:52:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:47.456 00:52:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:47.456 00:52:31 -- nvmf/common.sh@116 -- # sync 00:12:47.456 00:52:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:47.456 00:52:31 -- nvmf/common.sh@119 -- # set +e 00:12:47.456 00:52:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:47.456 00:52:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:47.456 rmmod nvme_tcp 00:12:47.456 rmmod nvme_fabrics 00:12:47.456 rmmod nvme_keyring 00:12:47.456 00:52:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:47.456 00:52:31 -- nvmf/common.sh@123 -- # set -e 00:12:47.456 00:52:31 -- nvmf/common.sh@124 -- # return 0 
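The multitarget checks that just ran reduce to a short sequence of multitarget_rpc.py calls plus jq to count what nvmf_get_targets reports. A rough sketch, with the long workspace path shortened to a variable for readability (the default RPC socket and jq on the box are assumed):

    rpc=./spdk/test/nvmf/target/multitarget_rpc.py     # abbreviated path
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default

Each delete call printing "true" above is the RPC's success response.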
00:12:47.456 00:52:31 -- nvmf/common.sh@477 -- # '[' -n 3342364 ']' 00:12:47.456 00:52:31 -- nvmf/common.sh@478 -- # killprocess 3342364 00:12:47.456 00:52:31 -- common/autotest_common.sh@926 -- # '[' -z 3342364 ']' 00:12:47.456 00:52:31 -- common/autotest_common.sh@930 -- # kill -0 3342364 00:12:47.456 00:52:31 -- common/autotest_common.sh@931 -- # uname 00:12:47.456 00:52:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:47.456 00:52:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3342364 00:12:47.456 00:52:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:47.456 00:52:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:47.456 00:52:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3342364' 00:12:47.456 killing process with pid 3342364 00:12:47.456 00:52:31 -- common/autotest_common.sh@945 -- # kill 3342364 00:12:47.456 00:52:31 -- common/autotest_common.sh@950 -- # wait 3342364 00:12:47.714 00:52:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:47.714 00:52:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:47.714 00:52:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:47.715 00:52:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.715 00:52:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:47.715 00:52:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.715 00:52:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.715 00:52:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.624 00:52:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:49.624 00:12:49.624 real 0m6.316s 00:12:49.624 user 0m9.230s 00:12:49.624 sys 0m1.877s 00:12:49.624 00:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.624 00:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:49.624 ************************************ 00:12:49.624 END TEST nvmf_multitarget 00:12:49.624 ************************************ 00:12:49.883 00:52:33 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:49.883 00:52:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:49.883 00:52:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:49.883 00:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:49.883 ************************************ 00:12:49.883 START TEST nvmf_rpc 00:12:49.883 ************************************ 00:12:49.883 00:52:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:49.883 * Looking for test storage... 
00:12:49.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.883 00:52:33 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.883 00:52:33 -- nvmf/common.sh@7 -- # uname -s 00:12:49.883 00:52:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.883 00:52:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.883 00:52:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.883 00:52:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.883 00:52:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.883 00:52:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.883 00:52:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.883 00:52:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.883 00:52:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.883 00:52:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.883 00:52:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.883 00:52:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.883 00:52:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.883 00:52:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.883 00:52:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.883 00:52:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.883 00:52:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.883 00:52:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.883 00:52:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.883 00:52:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.883 00:52:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.883 00:52:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.883 00:52:33 -- paths/export.sh@5 -- # export PATH 00:12:49.883 00:52:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.883 00:52:33 -- nvmf/common.sh@46 -- # : 0 00:12:49.883 00:52:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:49.883 00:52:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:49.883 00:52:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:49.883 00:52:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.883 00:52:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.883 00:52:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:49.883 00:52:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:49.883 00:52:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:49.883 00:52:33 -- target/rpc.sh@11 -- # loops=5 00:12:49.883 00:52:33 -- target/rpc.sh@23 -- # nvmftestinit 00:12:49.883 00:52:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:49.883 00:52:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.883 00:52:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:49.883 00:52:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:49.883 00:52:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:49.883 00:52:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.883 00:52:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.883 00:52:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.883 00:52:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:49.883 00:52:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:49.883 00:52:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:49.883 00:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.787 00:52:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:51.787 00:52:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:51.787 00:52:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:51.787 00:52:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:51.787 00:52:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:51.787 00:52:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:51.787 00:52:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:51.787 00:52:35 -- nvmf/common.sh@294 -- # net_devs=() 00:12:51.787 00:52:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:51.787 00:52:35 -- nvmf/common.sh@295 -- # e810=() 00:12:51.787 00:52:35 -- nvmf/common.sh@295 -- # local -ga e810 00:12:51.787 
00:52:35 -- nvmf/common.sh@296 -- # x722=() 00:12:51.787 00:52:35 -- nvmf/common.sh@296 -- # local -ga x722 00:12:51.787 00:52:35 -- nvmf/common.sh@297 -- # mlx=() 00:12:51.787 00:52:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:51.787 00:52:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.787 00:52:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:51.787 00:52:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:51.787 00:52:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:51.787 00:52:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.787 00:52:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.787 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.787 00:52:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.787 00:52:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.787 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.787 00:52:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:51.787 00:52:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.787 00:52:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.787 00:52:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.787 00:52:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.787 00:52:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.787 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.787 00:52:35 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:51.787 00:52:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.787 00:52:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.787 00:52:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.787 00:52:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.787 00:52:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.787 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.787 00:52:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.787 00:52:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:51.787 00:52:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:51.787 00:52:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:51.787 00:52:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:51.787 00:52:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.787 00:52:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.787 00:52:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.787 00:52:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:51.787 00:52:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.787 00:52:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.787 00:52:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:51.787 00:52:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.787 00:52:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.787 00:52:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:51.787 00:52:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:51.787 00:52:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.787 00:52:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.787 00:52:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.787 00:52:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.787 00:52:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:51.787 00:52:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.048 00:52:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.048 00:52:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.048 00:52:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:52.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:52.048 00:12:52.048 --- 10.0.0.2 ping statistics --- 00:12:52.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.048 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:52.048 00:52:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:12:52.048 00:12:52.048 --- 10.0.0.1 ping statistics --- 00:12:52.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.048 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:52.048 00:52:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.048 00:52:36 -- nvmf/common.sh@410 -- # return 0 00:12:52.048 00:52:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.048 00:52:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.048 00:52:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.048 00:52:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.048 00:52:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.048 00:52:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.048 00:52:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.048 00:52:36 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:52.048 00:52:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.048 00:52:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:52.048 00:52:36 -- common/autotest_common.sh@10 -- # set +x 00:12:52.048 00:52:36 -- nvmf/common.sh@469 -- # nvmfpid=3344607 00:12:52.048 00:52:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.048 00:52:36 -- nvmf/common.sh@470 -- # waitforlisten 3344607 00:12:52.048 00:52:36 -- common/autotest_common.sh@819 -- # '[' -z 3344607 ']' 00:12:52.048 00:52:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.048 00:52:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:52.048 00:52:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.048 00:52:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:52.048 00:52:36 -- common/autotest_common.sh@10 -- # set +x 00:12:52.048 [2024-07-23 00:52:36.103460] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:52.048 [2024-07-23 00:52:36.103552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.048 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.048 [2024-07-23 00:52:36.179414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.306 [2024-07-23 00:52:36.277258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.306 [2024-07-23 00:52:36.277417] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.306 [2024-07-23 00:52:36.277438] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.306 [2024-07-23 00:52:36.277452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
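As in the multitarget run, nvmfappstart launches the target application inside the namespace that now owns cvl_0_0 and waits for its RPC socket before the test proceeds. A condensed sketch of that step (paths abbreviated; the polling loop is only an approximation of what waitforlisten does):

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app answers
    until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The -m 0xF core mask covers cores 0-3, which is why four reactors report in just below.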
00:12:52.307 [2024-07-23 00:52:36.277533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.307 [2024-07-23 00:52:36.277591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.307 [2024-07-23 00:52:36.277643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.307 [2024-07-23 00:52:36.277648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.246 00:52:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:53.246 00:52:37 -- common/autotest_common.sh@852 -- # return 0 00:12:53.246 00:52:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.246 00:52:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:53.246 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 00:52:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.246 00:52:37 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:53.246 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.246 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.246 00:52:37 -- target/rpc.sh@26 -- # stats='{ 00:12:53.246 "tick_rate": 2700000000, 00:12:53.246 "poll_groups": [ 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_0", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [] 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_1", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [] 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_2", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [] 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_3", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [] 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }' 00:12:53.246 00:52:37 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:53.246 00:52:37 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:53.246 00:52:37 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:53.246 00:52:37 -- target/rpc.sh@15 -- # wc -l 00:12:53.246 00:52:37 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:53.246 00:52:37 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:53.246 00:52:37 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:53.246 00:52:37 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.246 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.246 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 [2024-07-23 00:52:37.200524] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.246 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.246 00:52:37 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:53.246 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.246 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.246 00:52:37 -- target/rpc.sh@33 -- # stats='{ 00:12:53.246 "tick_rate": 2700000000, 00:12:53.246 "poll_groups": [ 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_0", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [ 00:12:53.246 { 00:12:53.246 "trtype": "TCP" 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_1", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [ 00:12:53.246 { 00:12:53.246 "trtype": "TCP" 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_2", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [ 00:12:53.246 { 00:12:53.246 "trtype": "TCP" 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "nvmf_tgt_poll_group_3", 00:12:53.246 "admin_qpairs": 0, 00:12:53.246 "io_qpairs": 0, 00:12:53.246 "current_admin_qpairs": 0, 00:12:53.246 "current_io_qpairs": 0, 00:12:53.246 "pending_bdev_io": 0, 00:12:53.246 "completed_nvme_io": 0, 00:12:53.246 "transports": [ 00:12:53.246 { 00:12:53.246 "trtype": "TCP" 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }' 00:12:53.246 00:52:37 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:53.246 00:52:37 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:53.246 00:52:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:53.247 00:52:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.247 00:52:37 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:53.247 00:52:37 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:53.247 00:52:37 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:53.247 00:52:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:53.247 00:52:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.247 00:52:37 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:53.247 00:52:37 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:53.247 00:52:37 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:53.247 00:52:37 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:53.247 00:52:37 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:53.247 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.247 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.247 Malloc1 00:12:53.247 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.247 00:52:37 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.247 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.247 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.247 
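The two nvmf_get_stats dumps quoted above are the before/after evidence for transport creation: the poll groups start with an empty transports list and each gains a TCP entry once the transport exists. The same probe can be run by hand, assuming scripts/rpc.py against the default socket (a sketch, not the rpc.sh helpers themselves):

    ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./spdk/scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'      # now a TCP entry
    ./spdk/scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # still 0

The jq/awk pipeline is the same summation the script's jsum helper performs.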
00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.247 00:52:37 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.247 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.247 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.247 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.247 00:52:37 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:53.247 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.247 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.247 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.247 00:52:37 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.247 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.247 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.247 [2024-07-23 00:52:37.340992] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.247 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.247 00:52:37 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:53.247 00:52:37 -- common/autotest_common.sh@640 -- # local es=0 00:12:53.247 00:52:37 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:53.247 00:52:37 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:53.247 00:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:53.247 00:52:37 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:53.247 00:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:53.247 00:52:37 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:53.247 00:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:53.247 00:52:37 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:53.247 00:52:37 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.247 00:52:37 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:53.247 [2024-07-23 00:52:37.363540] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:53.247 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.247 could not add new controller: failed to write to nvme-fabrics device 00:12:53.247 00:52:37 -- common/autotest_common.sh@643 -- # es=1 00:12:53.247 00:52:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:53.247 00:52:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:53.247 00:52:37 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:53.247 00:52:37 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.247 00:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.247 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:12:53.247 00:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.247 00:52:37 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.813 00:52:37 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.813 00:52:37 -- common/autotest_common.sh@1177 -- # local i=0 00:12:53.813 00:52:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.813 00:52:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:53.813 00:52:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:56.378 00:52:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:56.378 00:52:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:56.378 00:52:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.378 00:52:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:56.378 00:52:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.378 00:52:39 -- common/autotest_common.sh@1187 -- # return 0 00:12:56.378 00:52:39 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.378 00:52:40 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.378 00:52:40 -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.378 00:52:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:56.378 00:52:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.378 00:52:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:56.378 00:52:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.378 00:52:40 -- common/autotest_common.sh@1210 -- # return 0 00:12:56.378 00:52:40 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:56.378 00:52:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.378 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:12:56.378 00:52:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.378 00:52:40 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.378 00:52:40 -- common/autotest_common.sh@640 -- # local es=0 00:12:56.378 00:52:40 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.378 00:52:40 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:56.378 00:52:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.378 00:52:40 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:56.378 00:52:40 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.378 00:52:40 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:56.378 00:52:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.378 00:52:40 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:56.378 00:52:40 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:56.378 00:52:40 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.378 [2024-07-23 00:52:40.093085] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:56.378 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:56.378 could not add new controller: failed to write to nvme-fabrics device 00:12:56.378 00:52:40 -- common/autotest_common.sh@643 -- # es=1 00:12:56.378 00:52:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:56.378 00:52:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:56.378 00:52:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:56.378 00:52:40 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:56.378 00:52:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.378 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:12:56.378 00:52:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.378 00:52:40 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.638 00:52:40 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.638 00:52:40 -- common/autotest_common.sh@1177 -- # local i=0 00:12:56.638 00:52:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.638 00:52:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:56.638 00:52:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:59.183 00:52:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:59.183 00:52:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:59.183 00:52:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.183 00:52:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:59.183 00:52:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.183 00:52:42 -- common/autotest_common.sh@1187 -- # return 0 00:12:59.183 00:52:42 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.183 00:52:42 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.183 00:52:42 -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.183 00:52:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:59.183 00:52:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.183 00:52:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:59.183 00:52:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.183 00:52:42 -- common/autotest_common.sh@1210 -- # return 0 00:12:59.183 00:52:42 -- 
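Everything from bdev_malloc_create down to this point is an access-control exercise: the connect attempt must fail while the host NQN is neither registered nor covered by allow_any_host, and succeed once either condition holds. A stripped-down reconstruction of that flow (the subsystem NQN and host UUID are the ones generated earlier in this log; the script additionally passes --hostid with the same UUID):

    nqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    ./spdk/scripts/rpc.py nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns $nqn Malloc1
    ./spdk/scripts/rpc.py nvmf_subsystem_allow_any_host -d $nqn          # lock the subsystem down
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 --hostnqn=$hostnqn   # rejected: host not allowed
    ./spdk/scripts/rpc.py nvmf_subsystem_add_host $nqn $hostnqn          # register this host
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 --hostnqn=$hostnqn   # accepted
    nvme disconnect -n $nqn

Re-enabling allow_any_host (-e), as done just above, is the alternative to registering each host NQN individually.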
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.183 00:52:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.183 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.183 00:52:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.183 00:52:42 -- target/rpc.sh@81 -- # seq 1 5 00:12:59.183 00:52:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.183 00:52:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.183 00:52:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.183 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.183 00:52:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.183 00:52:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.183 00:52:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.183 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.183 [2024-07-23 00:52:42.973759] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.183 00:52:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.183 00:52:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.183 00:52:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.183 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.183 00:52:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.183 00:52:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.183 00:52:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.183 00:52:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.183 00:52:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.183 00:52:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.445 00:52:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.445 00:52:43 -- common/autotest_common.sh@1177 -- # local i=0 00:12:59.445 00:52:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.445 00:52:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:59.445 00:52:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:01.980 00:52:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:01.980 00:52:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:01.980 00:52:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.980 00:52:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:01.980 00:52:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.980 00:52:45 -- common/autotest_common.sh@1187 -- # return 0 00:13:01.980 00:52:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.980 00:52:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.980 00:52:45 -- common/autotest_common.sh@1198 -- # local i=0 00:13:01.980 00:52:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:01.980 00:52:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:01.980 00:52:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:01.980 00:52:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.980 00:52:45 -- common/autotest_common.sh@1210 -- # return 0 00:13:01.980 00:52:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.980 00:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.980 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 00:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.980 00:52:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.980 00:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.980 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 00:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.980 00:52:45 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.980 00:52:45 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.980 00:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.980 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 00:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.980 00:52:45 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.980 00:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.980 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 [2024-07-23 00:52:45.758417] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.980 00:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.980 00:52:45 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.980 00:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.980 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 00:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.980 00:52:45 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.980 00:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.980 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 00:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.980 00:52:45 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.547 00:52:46 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.547 00:52:46 -- common/autotest_common.sh@1177 -- # local i=0 00:13:02.547 00:52:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.547 00:52:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:02.547 00:52:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:04.453 00:52:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:04.453 00:52:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:04.453 00:52:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.453 00:52:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:04.453 00:52:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.454 00:52:48 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:04.454 00:52:48 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.454 00:52:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.454 00:52:48 -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.454 00:52:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:04.454 00:52:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.454 00:52:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:04.454 00:52:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.454 00:52:48 -- common/autotest_common.sh@1210 -- # return 0 00:13:04.454 00:52:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.454 00:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.454 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:13:04.454 00:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.454 00:52:48 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.454 00:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.454 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:13:04.454 00:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.454 00:52:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.454 00:52:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.454 00:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.454 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:13:04.454 00:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.454 00:52:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.454 00:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.454 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:13:04.454 [2024-07-23 00:52:48.636246] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.454 00:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.454 00:52:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.454 00:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.454 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:13:04.454 00:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.454 00:52:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.454 00:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.454 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:13:04.712 00:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.712 00:52:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.279 00:52:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.279 00:52:49 -- common/autotest_common.sh@1177 -- # local i=0 00:13:05.279 00:52:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.279 00:52:49 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:05.279 00:52:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:07.182 00:52:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:07.182 00:52:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:07.182 00:52:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.182 00:52:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:07.182 00:52:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.183 00:52:51 -- common/autotest_common.sh@1187 -- # return 0 00:13:07.183 00:52:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.443 00:52:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.443 00:52:51 -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.443 00:52:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:07.443 00:52:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.443 00:52:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:07.443 00:52:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.443 00:52:51 -- common/autotest_common.sh@1210 -- # return 0 00:13:07.443 00:52:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.443 00:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.443 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:07.443 00:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.443 00:52:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.443 00:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.443 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:07.443 00:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.443 00:52:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.443 00:52:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.443 00:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.443 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:07.443 00:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.443 00:52:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.443 00:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.443 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:07.443 [2024-07-23 00:52:51.495308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.443 00:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.443 00:52:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.443 00:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.443 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:07.443 00:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.443 00:52:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.443 00:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.443 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:07.443 00:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.443 
00:52:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.011 00:52:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.011 00:52:52 -- common/autotest_common.sh@1177 -- # local i=0 00:13:08.011 00:52:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.011 00:52:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:08.011 00:52:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:10.543 00:52:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:10.543 00:52:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:10.543 00:52:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.543 00:52:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:10.543 00:52:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.543 00:52:54 -- common/autotest_common.sh@1187 -- # return 0 00:13:10.543 00:52:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.543 00:52:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.543 00:52:54 -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.543 00:52:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:10.543 00:52:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.543 00:52:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:10.543 00:52:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.543 00:52:54 -- common/autotest_common.sh@1210 -- # return 0 00:13:10.543 00:52:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.543 00:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.543 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.543 00:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.543 00:52:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.543 00:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.543 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.543 00:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.543 00:52:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.543 00:52:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.543 00:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.543 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.543 00:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.543 00:52:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.543 00:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.543 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.543 [2024-07-23 00:52:54.358024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.543 00:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.543 00:52:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.543 
00:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.543 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.543 00:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.543 00:52:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.543 00:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.543 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.543 00:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.543 00:52:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.109 00:52:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.109 00:52:55 -- common/autotest_common.sh@1177 -- # local i=0 00:13:11.109 00:52:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.109 00:52:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:11.109 00:52:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:13.016 00:52:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:13.016 00:52:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:13.016 00:52:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.016 00:52:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:13.016 00:52:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.016 00:52:57 -- common/autotest_common.sh@1187 -- # return 0 00:13:13.016 00:52:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.016 00:52:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.016 00:52:57 -- common/autotest_common.sh@1198 -- # local i=0 00:13:13.016 00:52:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:13.016 00:52:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.016 00:52:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:13.016 00:52:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.016 00:52:57 -- common/autotest_common.sh@1210 -- # return 0 00:13:13.016 00:52:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.016 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.016 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.016 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.016 00:52:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.016 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.016 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.016 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.016 00:52:57 -- target/rpc.sh@99 -- # seq 1 5 00:13:13.016 00:52:57 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.016 00:52:57 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.016 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.016 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.016 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.016 00:52:57 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.016 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.016 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.016 [2024-07-23 00:52:57.212063] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.016 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.016 00:52:57 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.016 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.016 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.274 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.274 00:52:57 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.274 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.274 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.274 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.274 00:52:57 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.274 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.274 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.274 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.274 00:52:57 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.274 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.274 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.274 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.274 00:52:57 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.274 00:52:57 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.274 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.274 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.274 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.274 00:52:57 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 [2024-07-23 00:52:57.260128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.275 00:52:57 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 [2024-07-23 00:52:57.308285] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.275 00:52:57 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 [2024-07-23 00:52:57.356456] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 
00:52:57 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.275 00:52:57 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 [2024-07-23 00:52:57.404635] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:13.275 00:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.275 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 00:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.275 00:52:57 -- target/rpc.sh@110 -- # stats='{ 00:13:13.275 "tick_rate": 2700000000, 00:13:13.275 "poll_groups": [ 00:13:13.275 { 00:13:13.275 "name": "nvmf_tgt_poll_group_0", 00:13:13.275 "admin_qpairs": 2, 00:13:13.275 "io_qpairs": 84, 00:13:13.275 "current_admin_qpairs": 0, 00:13:13.275 "current_io_qpairs": 0, 00:13:13.275 "pending_bdev_io": 0, 00:13:13.275 "completed_nvme_io": 135, 00:13:13.275 "transports": [ 00:13:13.275 { 00:13:13.275 "trtype": "TCP" 00:13:13.275 } 00:13:13.275 ] 00:13:13.275 }, 00:13:13.275 { 00:13:13.275 "name": "nvmf_tgt_poll_group_1", 00:13:13.275 "admin_qpairs": 2, 00:13:13.275 "io_qpairs": 84, 00:13:13.275 "current_admin_qpairs": 0, 00:13:13.275 "current_io_qpairs": 0, 00:13:13.275 "pending_bdev_io": 0, 00:13:13.275 "completed_nvme_io": 232, 00:13:13.275 "transports": [ 00:13:13.275 { 00:13:13.275 "trtype": "TCP" 00:13:13.275 } 00:13:13.275 ] 00:13:13.275 }, 00:13:13.275 { 00:13:13.275 "name": "nvmf_tgt_poll_group_2", 00:13:13.275 "admin_qpairs": 1, 00:13:13.275 "io_qpairs": 84, 00:13:13.275 "current_admin_qpairs": 0, 00:13:13.275 "current_io_qpairs": 0, 00:13:13.275 "pending_bdev_io": 0, 00:13:13.275 "completed_nvme_io": 137, 00:13:13.275 "transports": [ 00:13:13.275 { 00:13:13.275 "trtype": "TCP" 00:13:13.275 } 00:13:13.275 ] 00:13:13.275 }, 00:13:13.275 { 00:13:13.275 "name": "nvmf_tgt_poll_group_3", 00:13:13.275 "admin_qpairs": 2, 00:13:13.275 "io_qpairs": 84, 00:13:13.275 "current_admin_qpairs": 0, 00:13:13.275 "current_io_qpairs": 0, 00:13:13.275 "pending_bdev_io": 0, 00:13:13.275 "completed_nvme_io": 182, 00:13:13.275 "transports": [ 00:13:13.275 { 00:13:13.275 "trtype": "TCP" 00:13:13.276 } 00:13:13.276 ] 00:13:13.276 } 00:13:13.276 ] 00:13:13.276 }' 00:13:13.276 00:52:57 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:13.276 00:52:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:13.276 00:52:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:13.276 00:52:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.534 00:52:57 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:13.534 00:52:57 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:13.534 00:52:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:13.534 00:52:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:13.534 00:52:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.534 00:52:57 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:13.534 00:52:57 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:13.534 00:52:57 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:13.534 00:52:57 -- target/rpc.sh@123 -- # nvmftestfini 00:13:13.534 00:52:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:13.534 00:52:57 -- nvmf/common.sh@116 -- # sync 00:13:13.534 00:52:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:13.534 00:52:57 -- nvmf/common.sh@119 -- # set +e 00:13:13.534 00:52:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:13.534 00:52:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:13.534 rmmod nvme_tcp 00:13:13.534 rmmod nvme_fabrics 00:13:13.534 rmmod nvme_keyring 00:13:13.534 00:52:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:13.534 00:52:57 -- nvmf/common.sh@123 -- # set -e 00:13:13.534 00:52:57 -- 
nvmf/common.sh@124 -- # return 0 00:13:13.534 00:52:57 -- nvmf/common.sh@477 -- # '[' -n 3344607 ']' 00:13:13.534 00:52:57 -- nvmf/common.sh@478 -- # killprocess 3344607 00:13:13.534 00:52:57 -- common/autotest_common.sh@926 -- # '[' -z 3344607 ']' 00:13:13.534 00:52:57 -- common/autotest_common.sh@930 -- # kill -0 3344607 00:13:13.534 00:52:57 -- common/autotest_common.sh@931 -- # uname 00:13:13.534 00:52:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:13.534 00:52:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3344607 00:13:13.534 00:52:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:13.534 00:52:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:13.534 00:52:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3344607' 00:13:13.534 killing process with pid 3344607 00:13:13.534 00:52:57 -- common/autotest_common.sh@945 -- # kill 3344607 00:13:13.534 00:52:57 -- common/autotest_common.sh@950 -- # wait 3344607 00:13:13.794 00:52:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:13.794 00:52:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:13.794 00:52:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:13.794 00:52:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.794 00:52:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:13.794 00:52:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.794 00:52:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.794 00:52:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.707 00:52:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:15.966 00:13:15.966 real 0m26.074s 00:13:15.966 user 1m25.570s 00:13:15.966 sys 0m4.236s 00:13:15.966 00:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.966 00:52:59 -- common/autotest_common.sh@10 -- # set +x 00:13:15.966 ************************************ 00:13:15.966 END TEST nvmf_rpc 00:13:15.966 ************************************ 00:13:15.966 00:52:59 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.966 00:52:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:15.966 00:52:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.966 00:52:59 -- common/autotest_common.sh@10 -- # set +x 00:13:15.966 ************************************ 00:13:15.966 START TEST nvmf_invalid 00:13:15.966 ************************************ 00:13:15.966 00:52:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.966 * Looking for test storage... 
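Each pass of the loop traced above (target/rpc.sh@81-94) exercises one full subsystem lifecycle: create the subsystem, add a TCP listener and a namespace, open it to any host, connect with nvme-cli, wait for the serial to surface, then tear everything back down. Condensed into a standalone sketch; NQNs, address, bdev name, namespace ID and serial are taken from the trace, while using rpc.py directly in place of the harness's rpc_cmd wrapper is an assumption:

    rpc=scripts/rpc.py                       # assumed location of the SPDK RPC client
    nqn=nqn.2016-06.io.spdk:cnode1
    hostid=5b23e107-7094-e311-b1cb-001e67a97d55

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
            --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid="$hostid"
        waitforserial SPDKISFASTANDAWESOME   # poll lsblk as sketched earlier
        nvme disconnect -n "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done

The closing nvmf_get_stats check then sums the per-poll-group counters with the jsum helper traced at target/rpc.sh@19-20, roughly rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}', and asserts that the admin and I/O queue-pair totals are greater than zero.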
00:13:15.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.966 00:52:59 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.966 00:52:59 -- nvmf/common.sh@7 -- # uname -s 00:13:15.966 00:52:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.966 00:52:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.966 00:52:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.966 00:52:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.966 00:52:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.966 00:52:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.966 00:52:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.966 00:52:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.966 00:52:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.966 00:52:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.966 00:53:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.966 00:53:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.966 00:53:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.966 00:53:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.966 00:53:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.966 00:53:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.966 00:53:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.966 00:53:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.966 00:53:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.966 00:53:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.966 00:53:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.966 00:53:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.966 00:53:00 -- paths/export.sh@5 -- # export PATH 00:13:15.966 00:53:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.966 00:53:00 -- nvmf/common.sh@46 -- # : 0 00:13:15.966 00:53:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:15.966 00:53:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:15.966 00:53:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:15.966 00:53:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.966 00:53:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.966 00:53:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:15.966 00:53:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:15.966 00:53:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:15.966 00:53:00 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:15.966 00:53:00 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.966 00:53:00 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:15.966 00:53:00 -- target/invalid.sh@14 -- # target=foobar 00:13:15.966 00:53:00 -- target/invalid.sh@16 -- # RANDOM=0 00:13:15.966 00:53:00 -- target/invalid.sh@34 -- # nvmftestinit 00:13:15.966 00:53:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:15.966 00:53:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.966 00:53:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:15.966 00:53:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:15.966 00:53:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:15.966 00:53:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.966 00:53:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.966 00:53:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.966 00:53:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:15.966 00:53:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:15.966 00:53:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:15.966 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.871 00:53:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:17.871 00:53:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:17.871 00:53:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:17.871 00:53:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:17.871 00:53:01 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:17.871 00:53:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:17.871 00:53:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:17.871 00:53:01 -- nvmf/common.sh@294 -- # net_devs=() 00:13:17.871 00:53:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:17.871 00:53:01 -- nvmf/common.sh@295 -- # e810=() 00:13:17.871 00:53:01 -- nvmf/common.sh@295 -- # local -ga e810 00:13:17.871 00:53:01 -- nvmf/common.sh@296 -- # x722=() 00:13:17.871 00:53:01 -- nvmf/common.sh@296 -- # local -ga x722 00:13:17.871 00:53:01 -- nvmf/common.sh@297 -- # mlx=() 00:13:17.871 00:53:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:17.871 00:53:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.871 00:53:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:17.871 00:53:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:17.871 00:53:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:17.871 00:53:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:17.871 00:53:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.871 00:53:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:17.871 00:53:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.871 00:53:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:17.871 00:53:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:17.871 
00:53:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.871 00:53:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:17.871 00:53:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.871 00:53:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.871 00:53:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.871 00:53:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:17.871 00:53:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.871 00:53:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:17.871 00:53:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.871 00:53:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.871 00:53:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.871 00:53:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:17.871 00:53:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:17.871 00:53:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:17.871 00:53:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:17.871 00:53:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.871 00:53:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.871 00:53:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.871 00:53:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:17.871 00:53:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.871 00:53:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.871 00:53:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:17.871 00:53:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.871 00:53:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.871 00:53:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:17.871 00:53:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:17.871 00:53:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.871 00:53:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.871 00:53:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.871 00:53:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.871 00:53:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:17.871 00:53:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.871 00:53:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.871 00:53:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.132 00:53:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:18.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:13:18.132 00:13:18.132 --- 10.0.0.2 ping statistics --- 00:13:18.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.132 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:18.132 00:53:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:18.132 00:13:18.132 --- 10.0.0.1 ping statistics --- 00:13:18.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.132 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:18.132 00:53:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.132 00:53:02 -- nvmf/common.sh@410 -- # return 0 00:13:18.132 00:53:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:18.132 00:53:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.132 00:53:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:18.132 00:53:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:18.132 00:53:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.132 00:53:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:18.132 00:53:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:18.132 00:53:02 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:18.132 00:53:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:18.132 00:53:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:18.132 00:53:02 -- common/autotest_common.sh@10 -- # set +x 00:13:18.132 00:53:02 -- nvmf/common.sh@469 -- # nvmfpid=3349309 00:13:18.132 00:53:02 -- nvmf/common.sh@470 -- # waitforlisten 3349309 00:13:18.132 00:53:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.132 00:53:02 -- common/autotest_common.sh@819 -- # '[' -z 3349309 ']' 00:13:18.132 00:53:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.132 00:53:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:18.132 00:53:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.132 00:53:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:18.132 00:53:02 -- common/autotest_common.sh@10 -- # set +x 00:13:18.132 [2024-07-23 00:53:02.155320] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:18.132 [2024-07-23 00:53:02.155397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.132 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.132 [2024-07-23 00:53:02.225323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.132 [2024-07-23 00:53:02.313480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:18.132 [2024-07-23 00:53:02.313639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.132 [2024-07-23 00:53:02.313657] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.132 [2024-07-23 00:53:02.313669] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
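Before any of the invalid-input cases run, nvmftestinit / nvmf_tcp_init (traced above) builds the TCP test topology: one port of the NIC pair is moved into a private network namespace, the target side takes 10.0.0.2 and the host side keeps 10.0.0.1. A condensed sketch using the interface and namespace names from the trace (the cvl_0_* device names are specific to this test host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # target address reachable
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # host address reachable
    # the target itself is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF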
00:13:18.132 [2024-07-23 00:53:02.313732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.132 [2024-07-23 00:53:02.313789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.132 [2024-07-23 00:53:02.313853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.132 [2024-07-23 00:53:02.313856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.069 00:53:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:19.069 00:53:03 -- common/autotest_common.sh@852 -- # return 0 00:13:19.069 00:53:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:19.069 00:53:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:19.069 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:13:19.069 00:53:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.069 00:53:03 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:19.069 00:53:03 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12285 00:13:19.326 [2024-07-23 00:53:03.351890] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:19.326 00:53:03 -- target/invalid.sh@40 -- # out='request: 00:13:19.326 { 00:13:19.326 "nqn": "nqn.2016-06.io.spdk:cnode12285", 00:13:19.326 "tgt_name": "foobar", 00:13:19.326 "method": "nvmf_create_subsystem", 00:13:19.326 "req_id": 1 00:13:19.326 } 00:13:19.326 Got JSON-RPC error response 00:13:19.326 response: 00:13:19.326 { 00:13:19.326 "code": -32603, 00:13:19.326 "message": "Unable to find target foobar" 00:13:19.326 }' 00:13:19.326 00:53:03 -- target/invalid.sh@41 -- # [[ request: 00:13:19.326 { 00:13:19.326 "nqn": "nqn.2016-06.io.spdk:cnode12285", 00:13:19.326 "tgt_name": "foobar", 00:13:19.326 "method": "nvmf_create_subsystem", 00:13:19.326 "req_id": 1 00:13:19.326 } 00:13:19.326 Got JSON-RPC error response 00:13:19.326 response: 00:13:19.326 { 00:13:19.326 "code": -32603, 00:13:19.326 "message": "Unable to find target foobar" 00:13:19.326 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:19.326 00:53:03 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:19.326 00:53:03 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5131 00:13:19.583 [2024-07-23 00:53:03.592691] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5131: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:19.583 00:53:03 -- target/invalid.sh@45 -- # out='request: 00:13:19.583 { 00:13:19.583 "nqn": "nqn.2016-06.io.spdk:cnode5131", 00:13:19.583 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:19.583 "method": "nvmf_create_subsystem", 00:13:19.583 "req_id": 1 00:13:19.583 } 00:13:19.583 Got JSON-RPC error response 00:13:19.583 response: 00:13:19.583 { 00:13:19.583 "code": -32602, 00:13:19.583 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:19.583 }' 00:13:19.583 00:53:03 -- target/invalid.sh@46 -- # [[ request: 00:13:19.583 { 00:13:19.583 "nqn": "nqn.2016-06.io.spdk:cnode5131", 00:13:19.583 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:19.583 "method": "nvmf_create_subsystem", 00:13:19.583 "req_id": 1 00:13:19.583 } 00:13:19.583 Got JSON-RPC error response 00:13:19.583 response: 00:13:19.583 { 
00:13:19.583 "code": -32602, 00:13:19.583 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:19.583 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:19.583 00:53:03 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:19.583 00:53:03 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4958 00:13:19.841 [2024-07-23 00:53:03.833466] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4958: invalid model number 'SPDK_Controller' 00:13:19.841 00:53:03 -- target/invalid.sh@50 -- # out='request: 00:13:19.841 { 00:13:19.841 "nqn": "nqn.2016-06.io.spdk:cnode4958", 00:13:19.841 "model_number": "SPDK_Controller\u001f", 00:13:19.841 "method": "nvmf_create_subsystem", 00:13:19.841 "req_id": 1 00:13:19.841 } 00:13:19.841 Got JSON-RPC error response 00:13:19.841 response: 00:13:19.841 { 00:13:19.841 "code": -32602, 00:13:19.841 "message": "Invalid MN SPDK_Controller\u001f" 00:13:19.841 }' 00:13:19.841 00:53:03 -- target/invalid.sh@51 -- # [[ request: 00:13:19.841 { 00:13:19.841 "nqn": "nqn.2016-06.io.spdk:cnode4958", 00:13:19.841 "model_number": "SPDK_Controller\u001f", 00:13:19.841 "method": "nvmf_create_subsystem", 00:13:19.841 "req_id": 1 00:13:19.841 } 00:13:19.841 Got JSON-RPC error response 00:13:19.841 response: 00:13:19.841 { 00:13:19.841 "code": -32602, 00:13:19.841 "message": "Invalid MN SPDK_Controller\u001f" 00:13:19.841 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.841 00:53:03 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:19.841 00:53:03 -- target/invalid.sh@19 -- # local length=21 ll 00:13:19.841 00:53:03 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.841 00:53:03 -- target/invalid.sh@21 -- # local chars 00:13:19.841 00:53:03 -- target/invalid.sh@22 -- # local string 00:13:19.841 00:53:03 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.841 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # printf %x 49 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # string+=1 00:13:19.841 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.841 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # printf %x 78 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # string+=N 00:13:19.841 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.841 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # printf %x 107 00:13:19.841 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=k 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 70 00:13:19.842 00:53:03 -- target/invalid.sh@25 
-- # echo -e '\x46' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=F 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 35 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+='#' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 122 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=z 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 80 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=P 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 40 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+='(' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 93 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=']' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 96 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+='`' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 70 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=F 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 115 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=s 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 93 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=']' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 78 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=N 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 93 00:13:19.842 00:53:03 -- target/invalid.sh@25 
-- # echo -e '\x5d' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=']' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 119 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=w 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 100 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=d 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 96 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+='`' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 80 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=P 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 41 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=')' 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # printf %x 112 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:19.842 00:53:03 -- target/invalid.sh@25 -- # string+=p 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.842 00:53:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.842 00:53:03 -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:13:19.842 00:53:03 -- target/invalid.sh@31 -- # echo '1NkF#zP(]`Fs]N]wd`P)p' 00:13:19.842 00:53:03 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '1NkF#zP(]`Fs]N]wd`P)p' nqn.2016-06.io.spdk:cnode14542 00:13:20.101 [2024-07-23 00:53:04.146505] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14542: invalid serial number '1NkF#zP(]`Fs]N]wd`P)p' 00:13:20.101 00:53:04 -- target/invalid.sh@54 -- # out='request: 00:13:20.101 { 00:13:20.101 "nqn": "nqn.2016-06.io.spdk:cnode14542", 00:13:20.101 "serial_number": "1NkF#zP(]`Fs]N]wd`P)p", 00:13:20.101 "method": "nvmf_create_subsystem", 00:13:20.101 "req_id": 1 00:13:20.101 } 00:13:20.101 Got JSON-RPC error response 00:13:20.101 response: 00:13:20.101 { 00:13:20.101 "code": -32602, 00:13:20.101 "message": "Invalid SN 1NkF#zP(]`Fs]N]wd`P)p" 00:13:20.101 }' 00:13:20.101 00:53:04 -- target/invalid.sh@55 -- # [[ request: 00:13:20.101 { 00:13:20.101 "nqn": "nqn.2016-06.io.spdk:cnode14542", 00:13:20.101 "serial_number": "1NkF#zP(]`Fs]N]wd`P)p", 00:13:20.101 "method": "nvmf_create_subsystem", 00:13:20.101 "req_id": 1 00:13:20.101 } 00:13:20.101 Got JSON-RPC error response 00:13:20.101 response: 00:13:20.101 { 00:13:20.101 "code": -32602, 00:13:20.101 "message": "Invalid SN 
1NkF#zP(]`Fs]N]wd`P)p" 00:13:20.101 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:20.101 00:53:04 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:20.101 00:53:04 -- target/invalid.sh@19 -- # local length=41 ll 00:13:20.101 00:53:04 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:20.101 00:53:04 -- target/invalid.sh@21 -- # local chars 00:13:20.101 00:53:04 -- target/invalid.sh@22 -- # local string 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # printf %x 50 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # string+=2 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # printf %x 80 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # string+=P 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # printf %x 69 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # string+=E 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.101 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # printf %x 81 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:20.101 00:53:04 -- target/invalid.sh@25 -- # string+=Q 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 87 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=W 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 82 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=R 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 122 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=z 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 84 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=T 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 
00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 55 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=7 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 124 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='|' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 112 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=p 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 43 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=+ 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 43 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=+ 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 91 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='[' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 115 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=s 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 91 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='[' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 106 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=j 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 77 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=M 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 42 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='*' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 
00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 82 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=R 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 47 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=/ 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 101 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=e 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 88 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=X 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 106 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=j 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 40 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='(' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 105 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=i 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 121 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=y 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 70 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=F 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 65 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=A 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 80 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=P 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 
00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 111 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=o 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 123 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='{' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 89 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=Y 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 40 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+='(' 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 117 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=u 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 106 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # string+=j 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.102 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.102 00:53:04 -- target/invalid.sh@25 -- # printf %x 110 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # string+=n 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # printf %x 40 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # string+='(' 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # printf %x 59 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # string+=';' 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # printf %x 93 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # string+=']' 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # printf %x 49 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:20.103 00:53:04 -- target/invalid.sh@25 -- # string+=1 00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll++ )) 
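(Editor's note: the per-character appends traced through this stretch come from the test's gen_random_s helper (invalid.sh@19-31), which builds serial and model numbers of 21 and 41 characters — presumably one byte past the NVMe 20-byte SN and 40-byte MN fields — out of arbitrary ASCII codes 32-127. A minimal sketch of that pattern, reconstructed from the xtrace output here; the helper's actual definition in test/nvmf/target/invalid.sh may differ in detail:

gen_random_s_sketch() {
    local length=$1 ll ch string=
    local -a chars=( $(seq 32 127) )   # candidate ASCII codes, as in the chars=('32' ... '127') array in the trace
    for (( ll = 0; ll < length; ll++ )); do
        # pick a code and append the matching character (the printf %x / echo -e pairs traced above)
        printf -v ch '%b' "\\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")"
        string+=$ch
    done
    # invalid.sh@28 guards against a leading '-'; this sketch simply re-rolls in that case
    [[ $string == -* ]] && { gen_random_s_sketch "$length"; return; }
    printf '%s\n' "$string"
}
)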
00:13:20.103 00:53:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.103 00:53:04 -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:13:20.103 00:53:04 -- target/invalid.sh@31 -- # echo '2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1' 00:13:20.103 00:53:04 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1' nqn.2016-06.io.spdk:cnode18394 00:13:20.361 [2024-07-23 00:53:04.511737] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18394: invalid model number '2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1' 00:13:20.361 00:53:04 -- target/invalid.sh@58 -- # out='request: 00:13:20.361 { 00:13:20.361 "nqn": "nqn.2016-06.io.spdk:cnode18394", 00:13:20.361 "model_number": "2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1", 00:13:20.361 "method": "nvmf_create_subsystem", 00:13:20.361 "req_id": 1 00:13:20.361 } 00:13:20.361 Got JSON-RPC error response 00:13:20.361 response: 00:13:20.361 { 00:13:20.361 "code": -32602, 00:13:20.361 "message": "Invalid MN 2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1" 00:13:20.361 }' 00:13:20.361 00:53:04 -- target/invalid.sh@59 -- # [[ request: 00:13:20.361 { 00:13:20.361 "nqn": "nqn.2016-06.io.spdk:cnode18394", 00:13:20.361 "model_number": "2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1", 00:13:20.361 "method": "nvmf_create_subsystem", 00:13:20.361 "req_id": 1 00:13:20.361 } 00:13:20.361 Got JSON-RPC error response 00:13:20.361 response: 00:13:20.361 { 00:13:20.361 "code": -32602, 00:13:20.361 "message": "Invalid MN 2PEQWRzT7|p++[s[jM*R/eXj(iyFAPo{Y(ujn(;]1" 00:13:20.361 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:20.361 00:53:04 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:20.619 [2024-07-23 00:53:04.744541] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.619 00:53:04 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:20.877 00:53:04 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:20.877 00:53:05 -- target/invalid.sh@67 -- # echo '' 00:13:20.877 00:53:05 -- target/invalid.sh@67 -- # head -n 1 00:13:20.877 00:53:05 -- target/invalid.sh@67 -- # IP= 00:13:20.877 00:53:05 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:21.135 [2024-07-23 00:53:05.218109] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:21.135 00:53:05 -- target/invalid.sh@69 -- # out='request: 00:13:21.135 { 00:13:21.135 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:21.135 "listen_address": { 00:13:21.135 "trtype": "tcp", 00:13:21.135 "traddr": "", 00:13:21.135 "trsvcid": "4421" 00:13:21.135 }, 00:13:21.135 "method": "nvmf_subsystem_remove_listener", 00:13:21.135 "req_id": 1 00:13:21.135 } 00:13:21.135 Got JSON-RPC error response 00:13:21.135 response: 00:13:21.135 { 00:13:21.135 "code": -32602, 00:13:21.135 "message": "Invalid parameters" 00:13:21.135 }' 00:13:21.135 00:53:05 -- target/invalid.sh@70 -- # [[ request: 00:13:21.135 { 00:13:21.135 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:21.135 "listen_address": { 00:13:21.135 "trtype": "tcp", 00:13:21.135 "traddr": "", 00:13:21.135 "trsvcid": "4421" 00:13:21.135 }, 00:13:21.135 "method": "nvmf_subsystem_remove_listener", 00:13:21.135 "req_id": 1 
00:13:21.135 } 00:13:21.135 Got JSON-RPC error response 00:13:21.135 response: 00:13:21.135 { 00:13:21.135 "code": -32602, 00:13:21.135 "message": "Invalid parameters" 00:13:21.135 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:21.135 00:53:05 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14799 -i 0 00:13:21.394 [2024-07-23 00:53:05.474929] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14799: invalid cntlid range [0-65519] 00:13:21.394 00:53:05 -- target/invalid.sh@73 -- # out='request: 00:13:21.394 { 00:13:21.394 "nqn": "nqn.2016-06.io.spdk:cnode14799", 00:13:21.394 "min_cntlid": 0, 00:13:21.394 "method": "nvmf_create_subsystem", 00:13:21.394 "req_id": 1 00:13:21.394 } 00:13:21.394 Got JSON-RPC error response 00:13:21.394 response: 00:13:21.394 { 00:13:21.394 "code": -32602, 00:13:21.394 "message": "Invalid cntlid range [0-65519]" 00:13:21.394 }' 00:13:21.394 00:53:05 -- target/invalid.sh@74 -- # [[ request: 00:13:21.394 { 00:13:21.394 "nqn": "nqn.2016-06.io.spdk:cnode14799", 00:13:21.394 "min_cntlid": 0, 00:13:21.394 "method": "nvmf_create_subsystem", 00:13:21.394 "req_id": 1 00:13:21.394 } 00:13:21.394 Got JSON-RPC error response 00:13:21.394 response: 00:13:21.394 { 00:13:21.394 "code": -32602, 00:13:21.394 "message": "Invalid cntlid range [0-65519]" 00:13:21.394 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.394 00:53:05 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32348 -i 65520 00:13:21.652 [2024-07-23 00:53:05.711735] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32348: invalid cntlid range [65520-65519] 00:13:21.652 00:53:05 -- target/invalid.sh@75 -- # out='request: 00:13:21.652 { 00:13:21.652 "nqn": "nqn.2016-06.io.spdk:cnode32348", 00:13:21.652 "min_cntlid": 65520, 00:13:21.652 "method": "nvmf_create_subsystem", 00:13:21.652 "req_id": 1 00:13:21.652 } 00:13:21.652 Got JSON-RPC error response 00:13:21.652 response: 00:13:21.652 { 00:13:21.652 "code": -32602, 00:13:21.652 "message": "Invalid cntlid range [65520-65519]" 00:13:21.652 }' 00:13:21.652 00:53:05 -- target/invalid.sh@76 -- # [[ request: 00:13:21.652 { 00:13:21.652 "nqn": "nqn.2016-06.io.spdk:cnode32348", 00:13:21.652 "min_cntlid": 65520, 00:13:21.652 "method": "nvmf_create_subsystem", 00:13:21.652 "req_id": 1 00:13:21.652 } 00:13:21.652 Got JSON-RPC error response 00:13:21.652 response: 00:13:21.652 { 00:13:21.652 "code": -32602, 00:13:21.652 "message": "Invalid cntlid range [65520-65519]" 00:13:21.652 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.652 00:53:05 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17851 -I 0 00:13:21.911 [2024-07-23 00:53:05.948539] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17851: invalid cntlid range [1-0] 00:13:21.911 00:53:05 -- target/invalid.sh@77 -- # out='request: 00:13:21.911 { 00:13:21.911 "nqn": "nqn.2016-06.io.spdk:cnode17851", 00:13:21.911 "max_cntlid": 0, 00:13:21.911 "method": "nvmf_create_subsystem", 00:13:21.911 "req_id": 1 00:13:21.911 } 00:13:21.911 Got JSON-RPC error response 00:13:21.911 response: 00:13:21.911 { 00:13:21.911 "code": -32602, 00:13:21.911 "message": "Invalid cntlid range [1-0]" 00:13:21.911 }' 00:13:21.911 
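(Editor's note: the cntlid probes in this block — including the two that follow below — reduce to a handful of rpc.py calls, each expected to be rejected; per the error strings, min_cntlid/max_cntlid must stay within 1-65519 and min must not exceed max. Paths shortened here for readability; the log invokes rpc.py via the full workspace path:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14799 -i 0         # min_cntlid below range
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32348 -i 65520     # min_cntlid above range
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17851 -I 0         # max_cntlid below range
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6172 -I 65520      # max_cntlid above range
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8991 -i 6 -I 5     # min_cntlid greater than max_cntlid
)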
00:53:05 -- target/invalid.sh@78 -- # [[ request: 00:13:21.911 { 00:13:21.911 "nqn": "nqn.2016-06.io.spdk:cnode17851", 00:13:21.911 "max_cntlid": 0, 00:13:21.911 "method": "nvmf_create_subsystem", 00:13:21.911 "req_id": 1 00:13:21.911 } 00:13:21.911 Got JSON-RPC error response 00:13:21.911 response: 00:13:21.911 { 00:13:21.911 "code": -32602, 00:13:21.911 "message": "Invalid cntlid range [1-0]" 00:13:21.911 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.911 00:53:05 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6172 -I 65520 00:13:22.169 [2024-07-23 00:53:06.185367] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6172: invalid cntlid range [1-65520] 00:13:22.169 00:53:06 -- target/invalid.sh@79 -- # out='request: 00:13:22.169 { 00:13:22.169 "nqn": "nqn.2016-06.io.spdk:cnode6172", 00:13:22.169 "max_cntlid": 65520, 00:13:22.169 "method": "nvmf_create_subsystem", 00:13:22.169 "req_id": 1 00:13:22.169 } 00:13:22.169 Got JSON-RPC error response 00:13:22.169 response: 00:13:22.169 { 00:13:22.169 "code": -32602, 00:13:22.169 "message": "Invalid cntlid range [1-65520]" 00:13:22.169 }' 00:13:22.169 00:53:06 -- target/invalid.sh@80 -- # [[ request: 00:13:22.169 { 00:13:22.169 "nqn": "nqn.2016-06.io.spdk:cnode6172", 00:13:22.169 "max_cntlid": 65520, 00:13:22.169 "method": "nvmf_create_subsystem", 00:13:22.169 "req_id": 1 00:13:22.169 } 00:13:22.169 Got JSON-RPC error response 00:13:22.169 response: 00:13:22.169 { 00:13:22.169 "code": -32602, 00:13:22.169 "message": "Invalid cntlid range [1-65520]" 00:13:22.169 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.169 00:53:06 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8991 -i 6 -I 5 00:13:22.427 [2024-07-23 00:53:06.418156] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8991: invalid cntlid range [6-5] 00:13:22.427 00:53:06 -- target/invalid.sh@83 -- # out='request: 00:13:22.427 { 00:13:22.427 "nqn": "nqn.2016-06.io.spdk:cnode8991", 00:13:22.427 "min_cntlid": 6, 00:13:22.427 "max_cntlid": 5, 00:13:22.427 "method": "nvmf_create_subsystem", 00:13:22.427 "req_id": 1 00:13:22.427 } 00:13:22.427 Got JSON-RPC error response 00:13:22.427 response: 00:13:22.427 { 00:13:22.427 "code": -32602, 00:13:22.427 "message": "Invalid cntlid range [6-5]" 00:13:22.427 }' 00:13:22.427 00:53:06 -- target/invalid.sh@84 -- # [[ request: 00:13:22.427 { 00:13:22.427 "nqn": "nqn.2016-06.io.spdk:cnode8991", 00:13:22.427 "min_cntlid": 6, 00:13:22.427 "max_cntlid": 5, 00:13:22.427 "method": "nvmf_create_subsystem", 00:13:22.427 "req_id": 1 00:13:22.427 } 00:13:22.427 Got JSON-RPC error response 00:13:22.427 response: 00:13:22.427 { 00:13:22.427 "code": -32602, 00:13:22.427 "message": "Invalid cntlid range [6-5]" 00:13:22.427 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.428 00:53:06 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:22.428 00:53:06 -- target/invalid.sh@87 -- # out='request: 00:13:22.428 { 00:13:22.428 "name": "foobar", 00:13:22.428 "method": "nvmf_delete_target", 00:13:22.428 "req_id": 1 00:13:22.428 } 00:13:22.428 Got JSON-RPC error response 00:13:22.428 response: 00:13:22.428 { 00:13:22.428 "code": -32602, 00:13:22.428 "message": "The specified target 
doesn'\''t exist, cannot delete it." 00:13:22.428 }' 00:13:22.428 00:53:06 -- target/invalid.sh@88 -- # [[ request: 00:13:22.428 { 00:13:22.428 "name": "foobar", 00:13:22.428 "method": "nvmf_delete_target", 00:13:22.428 "req_id": 1 00:13:22.428 } 00:13:22.428 Got JSON-RPC error response 00:13:22.428 response: 00:13:22.428 { 00:13:22.428 "code": -32602, 00:13:22.428 "message": "The specified target doesn't exist, cannot delete it." 00:13:22.428 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:22.428 00:53:06 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:22.428 00:53:06 -- target/invalid.sh@91 -- # nvmftestfini 00:13:22.428 00:53:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:22.428 00:53:06 -- nvmf/common.sh@116 -- # sync 00:13:22.428 00:53:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:22.428 00:53:06 -- nvmf/common.sh@119 -- # set +e 00:13:22.428 00:53:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:22.428 00:53:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:22.428 rmmod nvme_tcp 00:13:22.428 rmmod nvme_fabrics 00:13:22.428 rmmod nvme_keyring 00:13:22.428 00:53:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:22.428 00:53:06 -- nvmf/common.sh@123 -- # set -e 00:13:22.428 00:53:06 -- nvmf/common.sh@124 -- # return 0 00:13:22.428 00:53:06 -- nvmf/common.sh@477 -- # '[' -n 3349309 ']' 00:13:22.428 00:53:06 -- nvmf/common.sh@478 -- # killprocess 3349309 00:13:22.428 00:53:06 -- common/autotest_common.sh@926 -- # '[' -z 3349309 ']' 00:13:22.428 00:53:06 -- common/autotest_common.sh@930 -- # kill -0 3349309 00:13:22.428 00:53:06 -- common/autotest_common.sh@931 -- # uname 00:13:22.428 00:53:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:22.428 00:53:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3349309 00:13:22.686 00:53:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:22.686 00:53:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:22.686 00:53:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3349309' 00:13:22.686 killing process with pid 3349309 00:13:22.687 00:53:06 -- common/autotest_common.sh@945 -- # kill 3349309 00:13:22.687 00:53:06 -- common/autotest_common.sh@950 -- # wait 3349309 00:13:22.687 00:53:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:22.687 00:53:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:22.687 00:53:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:22.687 00:53:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.687 00:53:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:22.687 00:53:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.687 00:53:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.687 00:53:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.229 00:53:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:25.229 00:13:25.229 real 0m8.981s 00:13:25.229 user 0m21.837s 00:13:25.229 sys 0m2.361s 00:13:25.229 00:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.229 00:53:08 -- common/autotest_common.sh@10 -- # set +x 00:13:25.229 ************************************ 00:13:25.229 END TEST nvmf_invalid 00:13:25.229 ************************************ 00:13:25.229 00:53:08 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:25.229 00:53:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:25.229 00:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.229 00:53:08 -- common/autotest_common.sh@10 -- # set +x 00:13:25.229 ************************************ 00:13:25.229 START TEST nvmf_abort 00:13:25.229 ************************************ 00:13:25.229 00:53:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:25.229 * Looking for test storage... 00:13:25.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.229 00:53:08 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.229 00:53:08 -- nvmf/common.sh@7 -- # uname -s 00:13:25.229 00:53:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.229 00:53:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.229 00:53:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.229 00:53:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.229 00:53:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.229 00:53:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.229 00:53:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.229 00:53:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.229 00:53:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.229 00:53:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.229 00:53:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.229 00:53:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.229 00:53:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.229 00:53:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.229 00:53:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.229 00:53:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.229 00:53:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.229 00:53:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.229 00:53:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.229 00:53:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.229 00:53:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.229 00:53:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.229 00:53:09 -- paths/export.sh@5 -- # export PATH 00:13:25.229 00:53:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.229 00:53:09 -- nvmf/common.sh@46 -- # : 0 00:13:25.229 00:53:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.229 00:53:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.229 00:53:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.229 00:53:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.229 00:53:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.229 00:53:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:25.229 00:53:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.229 00:53:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.229 00:53:09 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:25.229 00:53:09 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:25.229 00:53:09 -- target/abort.sh@14 -- # nvmftestinit 00:13:25.229 00:53:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:25.229 00:53:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.229 00:53:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:25.229 00:53:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:25.229 00:53:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:25.229 00:53:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.229 00:53:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.229 00:53:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.229 00:53:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:25.229 00:53:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:25.229 00:53:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:25.229 00:53:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.137 00:53:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
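(Editor's note: the nvmftestinit/nvmf_tcp_init trace that follows discovers the two E810 ports, moves one into a dedicated network namespace for the target, and sanity-checks the path with ping before the abort test starts. Condensed sketch of the equivalent manual steps; interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are what this rig reports, not fixed values, and the address flushes are omitted:

ip netns add cvl_0_0_ns_spdk                                   # namespace that will host the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port becomes the target-side NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # confirm initiator -> target reachability
)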
00:13:27.137 00:53:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:27.137 00:53:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:27.137 00:53:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:27.137 00:53:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:27.137 00:53:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:27.137 00:53:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:27.137 00:53:10 -- nvmf/common.sh@294 -- # net_devs=() 00:13:27.137 00:53:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:27.137 00:53:10 -- nvmf/common.sh@295 -- # e810=() 00:13:27.137 00:53:10 -- nvmf/common.sh@295 -- # local -ga e810 00:13:27.137 00:53:10 -- nvmf/common.sh@296 -- # x722=() 00:13:27.137 00:53:10 -- nvmf/common.sh@296 -- # local -ga x722 00:13:27.137 00:53:10 -- nvmf/common.sh@297 -- # mlx=() 00:13:27.137 00:53:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:27.137 00:53:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.137 00:53:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:27.137 00:53:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:27.137 00:53:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:27.137 00:53:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:27.137 00:53:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.137 00:53:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:27.137 00:53:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:27.137 00:53:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:13:27.137 00:53:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:27.137 00:53:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.137 00:53:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:27.137 00:53:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.137 00:53:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:27.137 00:53:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.137 00:53:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:27.137 00:53:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.137 00:53:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:27.137 00:53:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.137 00:53:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.137 00:53:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.137 00:53:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:27.137 00:53:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:27.137 00:53:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:27.137 00:53:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:27.138 00:53:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:27.138 00:53:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.138 00:53:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.138 00:53:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.138 00:53:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:27.138 00:53:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.138 00:53:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.138 00:53:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:27.138 00:53:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.138 00:53:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.138 00:53:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:27.138 00:53:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:27.138 00:53:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.138 00:53:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.138 00:53:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.138 00:53:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.138 00:53:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:27.138 00:53:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.138 00:53:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.138 00:53:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.138 00:53:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:27.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:13:27.138 00:13:27.138 --- 10.0.0.2 ping statistics --- 00:13:27.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.138 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:13:27.138 00:53:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:27.138 00:13:27.138 --- 10.0.0.1 ping statistics --- 00:13:27.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.138 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:27.138 00:53:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.138 00:53:11 -- nvmf/common.sh@410 -- # return 0 00:13:27.138 00:53:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:27.138 00:53:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.138 00:53:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:27.138 00:53:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:27.138 00:53:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.138 00:53:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:27.138 00:53:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:27.138 00:53:11 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:27.138 00:53:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:27.138 00:53:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:27.138 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 00:53:11 -- nvmf/common.sh@469 -- # nvmfpid=3351992 00:13:27.138 00:53:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:27.138 00:53:11 -- nvmf/common.sh@470 -- # waitforlisten 3351992 00:13:27.138 00:53:11 -- common/autotest_common.sh@819 -- # '[' -z 3351992 ']' 00:13:27.138 00:53:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.138 00:53:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:27.138 00:53:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.138 00:53:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:27.138 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 [2024-07-23 00:53:11.163222] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:27.138 [2024-07-23 00:53:11.163293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.138 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.138 [2024-07-23 00:53:11.226348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.138 [2024-07-23 00:53:11.309051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:27.138 [2024-07-23 00:53:11.309217] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.138 [2024-07-23 00:53:11.309235] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:27.138 [2024-07-23 00:53:11.309248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.138 [2024-07-23 00:53:11.309308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.138 [2024-07-23 00:53:11.309368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.138 [2024-07-23 00:53:11.309370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.074 00:53:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.074 00:53:12 -- common/autotest_common.sh@852 -- # return 0 00:13:28.074 00:53:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:28.074 00:53:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 00:53:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.074 00:53:12 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 [2024-07-23 00:53:12.169355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.074 00:53:12 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 Malloc0 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.074 00:53:12 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 Delay0 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.074 00:53:12 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.074 00:53:12 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.074 00:53:12 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 [2024-07-23 00:53:12.241355] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.074 00:53:12 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.074 00:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.074 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 00:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:13:28.074 00:53:12 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:28.332 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.332 [2024-07-23 00:53:12.338004] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:30.239 Initializing NVMe Controllers 00:13:30.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:30.239 controller IO queue size 128 less than required 00:13:30.239 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:30.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:30.239 Initialization complete. Launching workers. 00:13:30.239 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31202 00:13:30.239 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31263, failed to submit 62 00:13:30.239 success 31202, unsuccess 61, failed 0 00:13:30.239 00:53:14 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:30.239 00:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.239 00:53:14 -- common/autotest_common.sh@10 -- # set +x 00:13:30.239 00:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.239 00:53:14 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:30.239 00:53:14 -- target/abort.sh@38 -- # nvmftestfini 00:13:30.239 00:53:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.239 00:53:14 -- nvmf/common.sh@116 -- # sync 00:13:30.239 00:53:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.239 00:53:14 -- nvmf/common.sh@119 -- # set +e 00:13:30.239 00:53:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.239 00:53:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.239 rmmod nvme_tcp 00:13:30.499 rmmod nvme_fabrics 00:13:30.499 rmmod nvme_keyring 00:13:30.499 00:53:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.499 00:53:14 -- nvmf/common.sh@123 -- # set -e 00:13:30.499 00:53:14 -- nvmf/common.sh@124 -- # return 0 00:13:30.499 00:53:14 -- nvmf/common.sh@477 -- # '[' -n 3351992 ']' 00:13:30.499 00:53:14 -- nvmf/common.sh@478 -- # killprocess 3351992 00:13:30.499 00:53:14 -- common/autotest_common.sh@926 -- # '[' -z 3351992 ']' 00:13:30.499 00:53:14 -- common/autotest_common.sh@930 -- # kill -0 3351992 00:13:30.499 00:53:14 -- common/autotest_common.sh@931 -- # uname 00:13:30.499 00:53:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:30.499 00:53:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3351992 00:13:30.499 00:53:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:30.499 00:53:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:30.499 00:53:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3351992' 00:13:30.499 killing process with pid 3351992 00:13:30.499 00:53:14 -- common/autotest_common.sh@945 -- # kill 3351992 00:13:30.499 00:53:14 -- common/autotest_common.sh@950 -- # wait 3351992 00:13:30.759 00:53:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.759 00:53:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.759 00:53:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.759 00:53:14 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.759 00:53:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.759 00:53:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.759 00:53:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.759 00:53:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.696 00:53:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:32.696 00:13:32.696 real 0m7.864s 00:13:32.696 user 0m12.795s 00:13:32.696 sys 0m2.470s 00:13:32.696 00:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.696 00:53:16 -- common/autotest_common.sh@10 -- # set +x 00:13:32.696 ************************************ 00:13:32.696 END TEST nvmf_abort 00:13:32.696 ************************************ 00:13:32.696 00:53:16 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:32.696 00:53:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:32.696 00:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:32.696 00:53:16 -- common/autotest_common.sh@10 -- # set +x 00:13:32.696 ************************************ 00:13:32.696 START TEST nvmf_ns_hotplug_stress 00:13:32.696 ************************************ 00:13:32.696 00:53:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:32.956 * Looking for test storage... 00:13:32.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.956 00:53:16 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.956 00:53:16 -- nvmf/common.sh@7 -- # uname -s 00:13:32.956 00:53:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.956 00:53:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.956 00:53:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.956 00:53:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.956 00:53:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.956 00:53:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.956 00:53:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.956 00:53:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.956 00:53:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.956 00:53:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.956 00:53:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.956 00:53:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.956 00:53:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.956 00:53:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.956 00:53:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.956 00:53:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.956 00:53:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.956 00:53:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.956 00:53:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.956 00:53:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.956 00:53:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.956 00:53:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.956 00:53:16 -- paths/export.sh@5 -- # export PATH 00:13:32.956 00:53:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.956 00:53:16 -- nvmf/common.sh@46 -- # : 0 00:13:32.956 00:53:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:32.956 00:53:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:32.956 00:53:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:32.956 00:53:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.956 00:53:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.956 00:53:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:32.956 00:53:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:32.956 00:53:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:32.956 00:53:16 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.956 00:53:16 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:32.956 00:53:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:32.956 00:53:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.956 00:53:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:32.956 00:53:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:32.956 00:53:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:32.956 00:53:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:32.956 00:53:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.956 00:53:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.956 00:53:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:32.956 00:53:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:32.956 00:53:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:32.956 00:53:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.865 00:53:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.865 00:53:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:34.865 00:53:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:34.865 00:53:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:34.865 00:53:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:34.865 00:53:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:34.865 00:53:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:34.865 00:53:18 -- nvmf/common.sh@294 -- # net_devs=() 00:13:34.865 00:53:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:34.865 00:53:18 -- nvmf/common.sh@295 -- # e810=() 00:13:34.865 00:53:18 -- nvmf/common.sh@295 -- # local -ga e810 00:13:34.865 00:53:18 -- nvmf/common.sh@296 -- # x722=() 00:13:34.865 00:53:18 -- nvmf/common.sh@296 -- # local -ga x722 00:13:34.865 00:53:18 -- nvmf/common.sh@297 -- # mlx=() 00:13:34.865 00:53:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:34.865 00:53:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.865 00:53:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:34.865 00:53:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:34.865 00:53:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:34.865 00:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.865 00:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:34.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:34.865 00:53:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.865 00:53:18 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:34.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:34.865 00:53:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:34.865 00:53:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:34.865 00:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.865 00:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.865 00:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.866 00:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.866 00:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:34.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:34.866 00:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.866 00:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.866 00:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.866 00:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.866 00:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.866 00:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:34.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:34.866 00:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.866 00:53:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:34.866 00:53:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:34.866 00:53:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:34.866 00:53:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:34.866 00:53:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:34.866 00:53:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.866 00:53:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.866 00:53:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.866 00:53:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:34.866 00:53:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.866 00:53:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.866 00:53:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:34.866 00:53:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.866 00:53:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.866 00:53:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:34.866 00:53:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:34.866 00:53:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.866 00:53:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.866 00:53:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.866 00:53:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.866 00:53:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:34.866 00:53:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:34.866 00:53:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.866 00:53:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.866 00:53:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:34.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:13:34.866 00:13:34.866 --- 10.0.0.2 ping statistics --- 00:13:34.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.866 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:13:34.866 00:53:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:34.866 00:13:34.866 --- 10.0.0.1 ping statistics --- 00:13:34.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.866 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:34.866 00:53:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.866 00:53:19 -- nvmf/common.sh@410 -- # return 0 00:13:34.866 00:53:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.866 00:53:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.866 00:53:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.866 00:53:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.866 00:53:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.866 00:53:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.866 00:53:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:35.125 00:53:19 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:35.125 00:53:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:35.125 00:53:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:35.125 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:13:35.125 00:53:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:35.125 00:53:19 -- nvmf/common.sh@469 -- # nvmfpid=3354365 00:13:35.125 00:53:19 -- nvmf/common.sh@470 -- # waitforlisten 3354365 00:13:35.125 00:53:19 -- common/autotest_common.sh@819 -- # '[' -z 3354365 ']' 00:13:35.125 00:53:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.125 00:53:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:35.125 00:53:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.125 00:53:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:35.125 00:53:19 -- common/autotest_common.sh@10 -- # set +x 00:13:35.125 [2024-07-23 00:53:19.129924] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
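The interface plumbing in the trace above reduces to a short recipe: the first e810 port, cvl_0_0, is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and the SPDK target is then launched inside that namespace so NVMe/TCP traffic actually crosses between the two ports. A condensed sketch of the same steps, distilled from the commands logged above rather than quoted from nvmf/common.sh, with the long Jenkins workspace path shortened to a relative ./build path:

    # target port gets its own network namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &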
00:13:35.125 [2024-07-23 00:53:19.129990] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.126 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.126 [2024-07-23 00:53:19.195873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.126 [2024-07-23 00:53:19.286683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.126 [2024-07-23 00:53:19.286860] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.126 [2024-07-23 00:53:19.286881] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.126 [2024-07-23 00:53:19.286896] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.126 [2024-07-23 00:53:19.286985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.126 [2024-07-23 00:53:19.287040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.126 [2024-07-23 00:53:19.287043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.059 00:53:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:36.059 00:53:20 -- common/autotest_common.sh@852 -- # return 0 00:13:36.059 00:53:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.059 00:53:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:36.059 00:53:20 -- common/autotest_common.sh@10 -- # set +x 00:13:36.059 00:53:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.059 00:53:20 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:36.059 00:53:20 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:36.357 [2024-07-23 00:53:20.311227] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.357 00:53:20 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.615 00:53:20 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.615 [2024-07-23 00:53:20.777842] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.615 00:53:20 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.872 00:53:21 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:37.129 Malloc0 00:13:37.129 00:53:21 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:37.386 Delay0 00:13:37.386 00:53:21 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.644 00:53:21 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:37.901 NULL1 00:13:37.901 00:53:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:38.159 00:53:22 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3354688 00:13:38.159 00:53:22 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:38.159 00:53:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:38.159 00:53:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.159 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.538 Read completed with error (sct=0, sc=11) 00:13:39.538 00:53:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.796 00:53:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:39.796 00:53:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:39.796 true 00:13:40.056 00:53:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:40.056 00:53:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.623 00:53:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.881 00:53:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:40.881 00:53:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:41.139 true 00:13:41.139 00:53:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:41.139 00:53:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.397 00:53:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.655 00:53:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:41.655 00:53:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:41.913 true 00:13:41.913 00:53:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:41.913 00:53:26 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.870 00:53:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.127 00:53:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:43.127 00:53:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:43.385 true 00:13:43.385 00:53:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:43.385 00:53:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.643 00:53:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.901 00:53:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:43.901 00:53:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:44.161 true 00:13:44.161 00:53:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:44.161 00:53:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.098 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.098 00:53:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.098 00:53:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:45.098 00:53:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:45.356 true 00:13:45.356 00:53:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:45.356 00:53:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.614 00:53:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.872 00:53:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:45.873 00:53:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:46.130 true 00:13:46.130 00:53:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:46.130 00:53:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.066 00:53:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.324 00:53:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:47.324 00:53:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:47.582 true 
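Each repetition above is one pass of the hotplug loop: with Delay0 and NULL1 already attached as namespaces 1 and 2 of cnode1, the script detaches namespace 1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one block, repeating for as long as the spdk_nvme_perf load it started is still running. Condensed into a sketch (workspace paths shortened; the perf arguments are the ones recorded above):

    # 30-second random-read load against the target, started in the background
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 $PERF_PID; do                                    # loop until perf exits
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        ./scripts/rpc.py bdev_null_resize NULL1 $null_size         # grow the second namespace's bdev
    done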
00:13:47.582 00:53:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:47.582 00:53:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.840 00:53:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.102 00:53:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:48.102 00:53:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:48.102 true 00:13:48.392 00:53:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:48.392 00:53:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.328 00:53:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.328 00:53:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:49.328 00:53:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:49.585 true 00:13:49.585 00:53:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:49.586 00:53:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.843 00:53:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.101 00:53:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:50.101 00:53:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:50.359 true 00:13:50.359 00:53:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:50.359 00:53:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.297 00:53:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.555 00:53:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:51.555 00:53:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:51.813 true 00:13:51.813 00:53:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:51.813 00:53:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.071 00:53:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.329 00:53:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:52.329 00:53:36 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:52.586 true 00:13:52.586 00:53:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:52.586 00:53:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.521 00:53:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.521 00:53:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:53.521 00:53:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:53.779 true 00:13:53.779 00:53:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:53.779 00:53:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.036 00:53:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.294 00:53:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:54.294 00:53:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:54.552 true 00:13:54.552 00:53:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:54.552 00:53:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.489 00:53:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.747 00:53:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:55.747 00:53:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:56.005 true 00:13:56.005 00:53:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:56.005 00:53:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.262 00:53:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.520 00:53:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:56.520 00:53:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:56.778 true 00:13:56.778 00:53:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:56.778 00:53:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.714 00:53:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.714 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:13:57.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.714 00:53:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:57.714 00:53:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:57.972 true 00:13:57.972 00:53:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:57.972 00:53:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.230 00:53:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.489 00:53:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:58.489 00:53:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:58.747 true 00:13:58.747 00:53:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:13:58.747 00:53:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.121 00:53:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.121 00:53:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:00.121 00:53:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:00.377 true 00:14:00.377 00:53:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:00.377 00:53:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.203 00:53:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.461 00:53:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:01.461 00:53:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:01.461 true 00:14:01.461 00:53:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:01.461 00:53:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.719 00:53:45 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.976 00:53:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:01.976 00:53:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:02.233 true 00:14:02.233 00:53:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:02.233 00:53:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.166 00:53:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.430 00:53:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:03.430 00:53:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:03.734 true 00:14:03.734 00:53:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:03.734 00:53:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.991 00:53:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.251 00:53:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:04.251 00:53:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:04.251 true 00:14:04.509 00:53:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:04.509 00:53:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.448 00:53:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.448 00:53:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:05.448 00:53:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:05.706 true 00:14:05.706 00:53:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:05.706 00:53:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.963 00:53:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.221 00:53:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:06.221 00:53:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:06.479 true 00:14:06.479 00:53:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:06.479 00:53:50 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.413 00:53:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.671 00:53:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:07.671 00:53:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:07.929 true 00:14:07.929 00:53:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:07.929 00:53:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.187 00:53:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.444 00:53:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:08.444 00:53:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:08.444 Initializing NVMe Controllers 00:14:08.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.444 Controller IO queue size 128, less than required. 00:14:08.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.444 Controller IO queue size 128, less than required. 00:14:08.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:08.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:08.444 Initialization complete. Launching workers. 
00:14:08.444 ======================================================== 00:14:08.444 Latency(us) 00:14:08.444 Device Information : IOPS MiB/s Average min max 00:14:08.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1041.99 0.51 69352.84 1946.32 1014228.73 00:14:08.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12838.16 6.27 9971.16 2299.55 443985.57 00:14:08.444 ======================================================== 00:14:08.444 Total : 13880.15 6.78 14428.98 1946.32 1014228.73 00:14:08.444 00:14:08.702 true 00:14:08.702 00:53:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3354688 00:14:08.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3354688) - No such process 00:14:08.702 00:53:52 -- target/ns_hotplug_stress.sh@53 -- # wait 3354688 00:14:08.702 00:53:52 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.959 00:53:53 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.216 00:53:53 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:09.216 00:53:53 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:09.216 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:09.216 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.216 00:53:53 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:09.474 null0 00:14:09.474 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.474 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.474 00:53:53 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:09.732 null1 00:14:09.732 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.732 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.732 00:53:53 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:09.990 null2 00:14:09.990 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.990 00:53:53 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.990 00:53:53 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:10.247 null3 00:14:10.247 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.247 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.248 00:53:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:10.248 null4 00:14:10.506 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.506 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.506 00:53:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:10.506 null5 00:14:10.506 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.506 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.506 00:53:54 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:10.764 null6 00:14:10.764 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.764 00:53:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.764 00:53:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:11.022 null7 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.022 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@66 -- # wait 3358837 3358838 3358839 3358842 3358844 3358846 3358848 3358850 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.023 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.281 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.539 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
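The interleaved xtrace above (and continuing below) is eight background workers exercising namespace hotplug in parallel: the harness creates null0 through null7, starts one add_remove worker per bdev with namespace IDs 1 through 8, and then waits for all of them; each worker simply attaches and detaches its namespace ten times. Untangled from the interleaving, the pattern is roughly the following (workspace paths shortened, not the literal script text):

    # one worker: hot-add and hot-remove the same namespace ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            ./scripts/rpc.py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

    # eight null bdevs, then eight concurrent workers, one namespace ID each
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        ./scripts/rpc.py bdev_null_create null$i 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    wait "${pids[@]}"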
00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.540 00:53:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.798 00:53:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.056 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.057 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:12.315 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.573 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.831 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.831 00:53:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.831 00:53:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.831 00:53:57 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.831 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.089 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:13.347 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:13.347 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.347 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.347 00:53:57 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.347 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:13.347 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.347 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:13.617 00:53:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.618 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.880 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.880 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.880 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:13.880 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.880 00:53:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.880 00:53:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:13.880 00:53:58 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:13.880 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.880 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:13.880 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.880 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.880 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:13.880 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.138 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.396 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.654 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:14.912 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.912 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.912 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:14.912 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.912 00:53:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.912 00:53:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:14.912 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.912 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.912 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:14.913 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.913 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.170 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:15.428 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.687 00:53:59 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.687 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.944 00:53:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:15.944 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:15.944 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:15.945 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.203 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:16.461 00:54:00 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:16.461 00:54:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:16.461 00:54:00 -- nvmf/common.sh@116 -- # sync 00:14:16.461 00:54:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:16.461 00:54:00 -- nvmf/common.sh@119 -- # set +e 00:14:16.461 00:54:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:16.461 00:54:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:16.461 rmmod nvme_tcp 00:14:16.461 rmmod nvme_fabrics 00:14:16.461 rmmod nvme_keyring 00:14:16.461 00:54:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:16.461 00:54:00 -- nvmf/common.sh@123 -- # set -e 00:14:16.461 00:54:00 -- nvmf/common.sh@124 -- # return 0 00:14:16.461 00:54:00 -- nvmf/common.sh@477 -- # '[' -n 3354365 ']' 00:14:16.461 00:54:00 -- nvmf/common.sh@478 -- # killprocess 3354365 00:14:16.461 00:54:00 -- common/autotest_common.sh@926 -- # '[' -z 3354365 ']' 00:14:16.461 00:54:00 -- common/autotest_common.sh@930 -- # kill -0 3354365 00:14:16.461 00:54:00 -- common/autotest_common.sh@931 -- # uname 00:14:16.461 00:54:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:16.461 00:54:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3354365 00:14:16.461 00:54:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:16.461 00:54:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:16.461 00:54:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3354365' 00:14:16.461 killing process with pid 3354365 00:14:16.461 00:54:00 -- common/autotest_common.sh@945 -- # kill 3354365 00:14:16.461 00:54:00 -- common/autotest_common.sh@950 -- # wait 3354365 00:14:16.720 00:54:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.720 00:54:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:16.720 00:54:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:16.720 00:54:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.720 00:54:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:16.720 00:54:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.720 00:54:00 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:16.720 00:54:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.699 00:54:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:18.699 00:14:18.699 real 0m45.933s 00:14:18.699 user 3m26.115s 00:14:18.699 sys 0m16.330s 00:14:18.699 00:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.699 00:54:02 -- common/autotest_common.sh@10 -- # set +x 00:14:18.699 ************************************ 00:14:18.699 END TEST nvmf_ns_hotplug_stress 00:14:18.699 ************************************ 00:14:18.699 00:54:02 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:18.699 00:54:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:18.699 00:54:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:18.699 00:54:02 -- common/autotest_common.sh@10 -- # set +x 00:14:18.699 ************************************ 00:14:18.699 START TEST nvmf_connect_stress 00:14:18.699 ************************************ 00:14:18.699 00:54:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:18.699 * Looking for test storage... 00:14:18.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.699 00:54:02 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.699 00:54:02 -- nvmf/common.sh@7 -- # uname -s 00:14:18.699 00:54:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.699 00:54:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.699 00:54:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.699 00:54:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.699 00:54:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.699 00:54:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.699 00:54:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.699 00:54:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.699 00:54:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.699 00:54:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.699 00:54:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.699 00:54:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.699 00:54:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.699 00:54:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.699 00:54:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.699 00:54:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.699 00:54:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.699 00:54:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.699 00:54:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.699 00:54:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.699 00:54:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.699 00:54:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.699 00:54:02 -- paths/export.sh@5 -- # export PATH 00:14:18.699 00:54:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.699 00:54:02 -- nvmf/common.sh@46 -- # : 0 00:14:18.699 00:54:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:18.699 00:54:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:18.699 00:54:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:18.699 00:54:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.699 00:54:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.699 00:54:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:18.699 00:54:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:18.699 00:54:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:18.699 00:54:02 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:18.699 00:54:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:18.699 00:54:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.699 00:54:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:18.699 00:54:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:18.699 00:54:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:18.699 00:54:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.699 00:54:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.699 00:54:02 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.699 00:54:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:18.699 00:54:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:18.699 00:54:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:18.699 00:54:02 -- common/autotest_common.sh@10 -- # set +x 00:14:21.236 00:54:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.236 00:54:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.236 00:54:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.236 00:54:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.236 00:54:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.236 00:54:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.236 00:54:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.236 00:54:04 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.236 00:54:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.236 00:54:04 -- nvmf/common.sh@295 -- # e810=() 00:14:21.236 00:54:04 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.236 00:54:04 -- nvmf/common.sh@296 -- # x722=() 00:14:21.236 00:54:04 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.236 00:54:04 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.236 00:54:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.236 00:54:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.236 00:54:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.236 00:54:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:21.236 00:54:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.236 00:54:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.236 00:54:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:21.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:21.236 00:54:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.236 00:54:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:21.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:21.236 
00:54:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.236 00:54:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.236 00:54:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.236 00:54:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.236 00:54:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.236 00:54:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:21.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:21.236 00:54:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.236 00:54:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.236 00:54:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.236 00:54:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.236 00:54:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.236 00:54:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:21.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:21.236 00:54:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.236 00:54:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.236 00:54:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.236 00:54:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:21.236 00:54:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:21.236 00:54:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.236 00:54:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.236 00:54:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.236 00:54:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:21.236 00:54:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.236 00:54:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.236 00:54:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:21.236 00:54:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.236 00:54:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.236 00:54:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:21.236 00:54:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:21.236 00:54:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.236 00:54:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.236 00:54:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.236 00:54:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.236 00:54:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:21.236 00:54:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.236 00:54:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.236 00:54:04 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.236 00:54:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:21.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:14:21.236 00:14:21.236 --- 10.0.0.2 ping statistics --- 00:14:21.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.236 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:21.236 00:54:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:14:21.236 00:14:21.236 --- 10.0.0.1 ping statistics --- 00:14:21.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.236 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:21.236 00:54:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.237 00:54:04 -- nvmf/common.sh@410 -- # return 0 00:14:21.237 00:54:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.237 00:54:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.237 00:54:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:21.237 00:54:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:21.237 00:54:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.237 00:54:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:21.237 00:54:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:21.237 00:54:05 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:21.237 00:54:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.237 00:54:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.237 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:14:21.237 00:54:05 -- nvmf/common.sh@469 -- # nvmfpid=3361736 00:14:21.237 00:54:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:21.237 00:54:05 -- nvmf/common.sh@470 -- # waitforlisten 3361736 00:14:21.237 00:54:05 -- common/autotest_common.sh@819 -- # '[' -z 3361736 ']' 00:14:21.237 00:54:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.237 00:54:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.237 00:54:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.237 00:54:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.237 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:14:21.237 [2024-07-23 00:54:05.050939] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
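Everything from @241 onward builds the two-sided test topology on the two E810 ports found above: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, and the two ping runs above confirm both directions before nvmf_tgt is launched inside the namespace. Condensed from the trace (the nvmf_tgt path is shortened here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open TCP 4420 on the initiator-side interface
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE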
00:14:21.237 [2024-07-23 00:54:05.051032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.237 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.237 [2024-07-23 00:54:05.120530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.237 [2024-07-23 00:54:05.208922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.237 [2024-07-23 00:54:05.209098] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.237 [2024-07-23 00:54:05.209118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.237 [2024-07-23 00:54:05.209133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.237 [2024-07-23 00:54:05.209234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.237 [2024-07-23 00:54:05.209334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.237 [2024-07-23 00:54:05.209336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.805 00:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:21.805 00:54:05 -- common/autotest_common.sh@852 -- # return 0 00:14:21.805 00:54:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:21.805 00:54:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:21.805 00:54:05 -- common/autotest_common.sh@10 -- # set +x 00:14:22.063 00:54:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.063 00:54:06 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.063 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.063 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:22.063 [2024-07-23 00:54:06.015426] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.063 00:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.063 00:54:06 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.063 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.063 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:22.063 00:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.063 00:54:06 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.063 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.063 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:22.063 [2024-07-23 00:54:06.050739] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.063 00:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.063 00:54:06 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:22.063 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.063 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:22.063 NULL1 00:14:22.063 00:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.063 00:54:06 -- target/connect_stress.sh@21 -- # PERF_PID=3361895 00:14:22.063 00:54:06 -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:22.063 00:54:06 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:22.063 00:54:06 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.063 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.063 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.064 00:54:06 -- target/connect_stress.sh@28 -- # cat 00:14:22.064 00:54:06 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:22.064 00:54:06 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:22.064 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.064 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:22.324 00:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.324 00:54:06 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:22.324 00:54:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.324 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.324 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:22.584 00:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.584 00:54:06 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:22.584 00:54:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.584 00:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.584 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:14:23.151 00:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.151 00:54:07 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:23.151 00:54:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.151 00:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.151 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:14:23.408 00:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.408 00:54:07 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:23.408 00:54:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.408 00:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.408 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:14:23.666 00:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.666 00:54:07 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:23.666 00:54:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.666 00:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.666 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:14:23.925 00:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.925 00:54:08 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:23.925 00:54:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.925 00:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.925 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:14:24.185 00:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.185 00:54:08 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:24.185 00:54:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.185 00:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.185 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:14:24.751 00:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.751 00:54:08 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:24.751 00:54:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.751 00:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.751 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:14:25.008 00:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.008 00:54:08 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:25.008 00:54:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.008 00:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.008 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:14:25.266 00:54:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.266 00:54:09 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:25.266 00:54:09 -- target/connect_stress.sh@35 -- # rpc_cmd 
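The repeating kill -0 / rpc_cmd records here are the connect_stress monitoring loop: connect_stress.sh provisions a TCP subsystem, starts the stress client in the background for 10 seconds, and keeps issuing the RPCs batched into rpc.txt for as long as the client PID is alive. A rough sketch of that flow with parameters copied from the trace; rpc_cmd is the autotest RPC helper seen in the trace, the contents of rpc.txt are not shown in the log, and feeding the file to rpc_cmd on stdin is an assumption about the script rather than something the trace confirms:

  # Provision the target (parameters as traced above).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  # Run the stress client against the listener for 10 seconds, in the background.
  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  # While the client is still alive, keep hitting the target with the batched RPCs (assumed stdin feed).
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"
  done
  wait "$PERF_PID"

When the client finishes its 10-second run, kill -0 fails with "No such process" (visible further down) and the script proceeds to teardown.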
00:14:25.266 00:54:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.266 00:54:09 -- common/autotest_common.sh@10 -- # set +x 00:14:25.525 00:54:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.525 00:54:09 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:25.525 00:54:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.525 00:54:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.525 00:54:09 -- common/autotest_common.sh@10 -- # set +x 00:14:25.784 00:54:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.784 00:54:09 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:25.784 00:54:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.784 00:54:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.784 00:54:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.350 00:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.350 00:54:10 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:26.350 00:54:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.350 00:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.350 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:14:26.608 00:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.608 00:54:10 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:26.608 00:54:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.608 00:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.608 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:14:26.867 00:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.867 00:54:10 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:26.867 00:54:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.867 00:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.867 00:54:10 -- common/autotest_common.sh@10 -- # set +x 00:14:27.127 00:54:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.127 00:54:11 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:27.127 00:54:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.127 00:54:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.127 00:54:11 -- common/autotest_common.sh@10 -- # set +x 00:14:27.385 00:54:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.385 00:54:11 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:27.385 00:54:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.385 00:54:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.385 00:54:11 -- common/autotest_common.sh@10 -- # set +x 00:14:27.952 00:54:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.952 00:54:11 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:27.952 00:54:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.952 00:54:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.952 00:54:11 -- common/autotest_common.sh@10 -- # set +x 00:14:28.211 00:54:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.211 00:54:12 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:28.211 00:54:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.211 00:54:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.211 00:54:12 -- common/autotest_common.sh@10 -- # set +x 00:14:28.470 00:54:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.470 00:54:12 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:28.470 00:54:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.470 
00:54:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.470 00:54:12 -- common/autotest_common.sh@10 -- # set +x 00:14:28.728 00:54:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.728 00:54:12 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:28.728 00:54:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.728 00:54:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.728 00:54:12 -- common/autotest_common.sh@10 -- # set +x 00:14:28.986 00:54:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.986 00:54:13 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:28.986 00:54:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.986 00:54:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.986 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:14:29.556 00:54:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.556 00:54:13 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:29.556 00:54:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.556 00:54:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.556 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:14:29.815 00:54:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.815 00:54:13 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:29.815 00:54:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.815 00:54:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.815 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:14:30.074 00:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.074 00:54:14 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:30.074 00:54:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.074 00:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.074 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:14:30.331 00:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.331 00:54:14 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:30.331 00:54:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.331 00:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.331 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:14:30.590 00:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.590 00:54:14 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:30.590 00:54:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.590 00:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.590 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:14:31.156 00:54:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.156 00:54:15 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:31.156 00:54:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.156 00:54:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.156 00:54:15 -- common/autotest_common.sh@10 -- # set +x 00:14:31.415 00:54:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.415 00:54:15 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:31.415 00:54:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.415 00:54:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.415 00:54:15 -- common/autotest_common.sh@10 -- # set +x 00:14:31.674 00:54:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.674 00:54:15 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:31.674 00:54:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.674 00:54:15 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.674 00:54:15 -- common/autotest_common.sh@10 -- # set +x 00:14:31.932 00:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.932 00:54:16 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:31.932 00:54:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.932 00:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.932 00:54:16 -- common/autotest_common.sh@10 -- # set +x 00:14:32.190 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.190 00:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.190 00:54:16 -- target/connect_stress.sh@34 -- # kill -0 3361895 00:14:32.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3361895) - No such process 00:14:32.190 00:54:16 -- target/connect_stress.sh@38 -- # wait 3361895 00:14:32.190 00:54:16 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:32.190 00:54:16 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:32.190 00:54:16 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:32.190 00:54:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:32.190 00:54:16 -- nvmf/common.sh@116 -- # sync 00:14:32.450 00:54:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:32.450 00:54:16 -- nvmf/common.sh@119 -- # set +e 00:14:32.450 00:54:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:32.450 00:54:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.450 rmmod nvme_tcp 00:14:32.450 rmmod nvme_fabrics 00:14:32.450 rmmod nvme_keyring 00:14:32.450 00:54:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:32.450 00:54:16 -- nvmf/common.sh@123 -- # set -e 00:14:32.450 00:54:16 -- nvmf/common.sh@124 -- # return 0 00:14:32.450 00:54:16 -- nvmf/common.sh@477 -- # '[' -n 3361736 ']' 00:14:32.450 00:54:16 -- nvmf/common.sh@478 -- # killprocess 3361736 00:14:32.450 00:54:16 -- common/autotest_common.sh@926 -- # '[' -z 3361736 ']' 00:14:32.450 00:54:16 -- common/autotest_common.sh@930 -- # kill -0 3361736 00:14:32.450 00:54:16 -- common/autotest_common.sh@931 -- # uname 00:14:32.450 00:54:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.450 00:54:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3361736 00:14:32.450 00:54:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:32.450 00:54:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:32.450 00:54:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3361736' 00:14:32.450 killing process with pid 3361736 00:14:32.450 00:54:16 -- common/autotest_common.sh@945 -- # kill 3361736 00:14:32.450 00:54:16 -- common/autotest_common.sh@950 -- # wait 3361736 00:14:32.711 00:54:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.711 00:54:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.711 00:54:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:32.711 00:54:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.711 00:54:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:32.711 00:54:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.711 00:54:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.711 00:54:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.650 00:54:18 -- nvmf/common.sh@278 -- # ip -4 addr 
flush cvl_0_1 00:14:34.650 00:14:34.650 real 0m15.948s 00:14:34.650 user 0m40.399s 00:14:34.650 sys 0m5.904s 00:14:34.650 00:54:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.650 00:54:18 -- common/autotest_common.sh@10 -- # set +x 00:14:34.650 ************************************ 00:14:34.650 END TEST nvmf_connect_stress 00:14:34.650 ************************************ 00:14:34.650 00:54:18 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:34.650 00:54:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:34.650 00:54:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.650 00:54:18 -- common/autotest_common.sh@10 -- # set +x 00:14:34.650 ************************************ 00:14:34.650 START TEST nvmf_fused_ordering 00:14:34.650 ************************************ 00:14:34.651 00:54:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:34.651 * Looking for test storage... 00:14:34.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.651 00:54:18 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.651 00:54:18 -- nvmf/common.sh@7 -- # uname -s 00:14:34.651 00:54:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.651 00:54:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.651 00:54:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.651 00:54:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.651 00:54:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.651 00:54:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.651 00:54:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.651 00:54:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.651 00:54:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.651 00:54:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.651 00:54:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.651 00:54:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.651 00:54:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.651 00:54:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.651 00:54:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.651 00:54:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.651 00:54:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.651 00:54:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.651 00:54:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.651 00:54:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.651 00:54:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.651 00:54:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.651 00:54:18 -- paths/export.sh@5 -- # export PATH 00:14:34.651 00:54:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.651 00:54:18 -- nvmf/common.sh@46 -- # : 0 00:14:34.651 00:54:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.651 00:54:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.651 00:54:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.651 00:54:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.651 00:54:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.651 00:54:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.651 00:54:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.651 00:54:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.651 00:54:18 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:34.651 00:54:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.651 00:54:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.651 00:54:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.651 00:54:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.651 00:54:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.651 00:54:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.651 00:54:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.651 00:54:18 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.651 00:54:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:34.651 00:54:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:34.651 00:54:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:34.651 00:54:18 -- common/autotest_common.sh@10 -- # set +x 00:14:37.188 00:54:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:37.188 00:54:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:37.188 00:54:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:37.188 00:54:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:37.188 00:54:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:37.188 00:54:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:37.188 00:54:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:37.188 00:54:20 -- nvmf/common.sh@294 -- # net_devs=() 00:14:37.188 00:54:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:37.188 00:54:20 -- nvmf/common.sh@295 -- # e810=() 00:14:37.188 00:54:20 -- nvmf/common.sh@295 -- # local -ga e810 00:14:37.188 00:54:20 -- nvmf/common.sh@296 -- # x722=() 00:14:37.188 00:54:20 -- nvmf/common.sh@296 -- # local -ga x722 00:14:37.188 00:54:20 -- nvmf/common.sh@297 -- # mlx=() 00:14:37.188 00:54:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:37.188 00:54:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.188 00:54:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:37.188 00:54:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:37.188 00:54:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:37.188 00:54:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.188 00:54:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:37.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:37.188 00:54:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.188 00:54:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:37.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:37.188 
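The pci_net_devs assignments in the records above show how common.sh maps each detected E810 PCI function to its kernel net device through sysfs. A compact restatement of that loop, a sketch built only from paths and names visible in the trace:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # Each PCI function exposes its bound netdevs under /sys/bus/pci/devices/<bdf>/net/.
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names (cvl_0_0, cvl_0_1)
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done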
00:54:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:37.188 00:54:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.188 00:54:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.188 00:54:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.188 00:54:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.188 00:54:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:37.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:37.188 00:54:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.188 00:54:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.188 00:54:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.188 00:54:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.188 00:54:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.188 00:54:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:37.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:37.188 00:54:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.188 00:54:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:37.188 00:54:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:37.188 00:54:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:37.188 00:54:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:37.188 00:54:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.188 00:54:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.188 00:54:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.188 00:54:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:37.188 00:54:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.188 00:54:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.188 00:54:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:37.188 00:54:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.188 00:54:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.188 00:54:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:37.188 00:54:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:37.188 00:54:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.189 00:54:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.189 00:54:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.189 00:54:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.189 00:54:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:37.189 00:54:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.189 00:54:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.189 00:54:20 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.189 00:54:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:37.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:14:37.189 00:14:37.189 --- 10.0.0.2 ping statistics --- 00:14:37.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.189 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:37.189 00:54:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:37.189 00:14:37.189 --- 10.0.0.1 ping statistics --- 00:14:37.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.189 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:37.189 00:54:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.189 00:54:20 -- nvmf/common.sh@410 -- # return 0 00:14:37.189 00:54:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:37.189 00:54:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.189 00:54:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:37.189 00:54:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:37.189 00:54:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.189 00:54:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:37.189 00:54:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:37.189 00:54:20 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:37.189 00:54:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:37.189 00:54:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:37.189 00:54:20 -- common/autotest_common.sh@10 -- # set +x 00:14:37.189 00:54:20 -- nvmf/common.sh@469 -- # nvmfpid=3365591 00:14:37.189 00:54:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.189 00:54:20 -- nvmf/common.sh@470 -- # waitforlisten 3365591 00:14:37.189 00:54:20 -- common/autotest_common.sh@819 -- # '[' -z 3365591 ']' 00:14:37.189 00:54:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.189 00:54:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:37.189 00:54:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.189 00:54:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:37.189 00:54:20 -- common/autotest_common.sh@10 -- # set +x 00:14:37.189 [2024-07-23 00:54:20.982694] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
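nvmfappstart here launches a second nvmf_tgt instance for the fused_ordering test, this time with core mask 0x2 (a single reactor), inside the same namespace. A hedged sketch of the start-and-wait pattern the trace implies; backgrounding with & and capturing $! is an assumption about the nvmfappstart/waitforlisten helpers, not something printed in the log:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten polls until the app accepts RPCs on /var/tmp/spdk.sock; a trap then
  # ensures nvmftestfini runs on exit (both steps are visible in the surrounding trace).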
00:14:37.189 [2024-07-23 00:54:20.982779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.189 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.189 [2024-07-23 00:54:21.050457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.189 [2024-07-23 00:54:21.139671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:37.189 [2024-07-23 00:54:21.139816] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.189 [2024-07-23 00:54:21.139834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.189 [2024-07-23 00:54:21.139848] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.189 [2024-07-23 00:54:21.139877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.757 00:54:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:37.757 00:54:21 -- common/autotest_common.sh@852 -- # return 0 00:14:37.757 00:54:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:37.757 00:54:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:37.757 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:37.757 00:54:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.757 00:54:21 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.757 00:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.757 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:37.757 [2024-07-23 00:54:21.946468] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.757 00:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.757 00:54:21 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.757 00:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.757 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:37.757 00:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.757 00:54:21 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.757 00:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.757 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:38.015 [2024-07-23 00:54:21.962644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.015 00:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.015 00:54:21 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.015 00:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.015 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:38.015 NULL1 00:14:38.015 00:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.015 00:54:21 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:38.015 00:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.015 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:38.015 00:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.015 00:54:21 -- target/fused_ordering.sh@20 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:38.015 00:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.015 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:14:38.015 00:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.015 00:54:21 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:38.015 [2024-07-23 00:54:22.006285] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:38.015 [2024-07-23 00:54:22.006327] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3365748 ] 00:14:38.015 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.584 Attached to nqn.2016-06.io.spdk:cnode1 00:14:38.584 Namespace ID: 1 size: 1GB 00:14:38.584 fused_ordering(0) 00:14:38.584 fused_ordering(1) 00:14:38.584 fused_ordering(2) 00:14:38.584 fused_ordering(3) 00:14:38.584 fused_ordering(4) 00:14:38.584 fused_ordering(5) 00:14:38.584 fused_ordering(6) 00:14:38.584 fused_ordering(7) 00:14:38.584 fused_ordering(8) 00:14:38.584 fused_ordering(9) 00:14:38.584 fused_ordering(10) 00:14:38.584 fused_ordering(11) 00:14:38.584 fused_ordering(12) 00:14:38.584 fused_ordering(13) 00:14:38.584 fused_ordering(14) 00:14:38.584 fused_ordering(15) 00:14:38.584 fused_ordering(16) 00:14:38.584 fused_ordering(17) 00:14:38.584 fused_ordering(18) 00:14:38.584 fused_ordering(19) 00:14:38.584 fused_ordering(20) 00:14:38.584 fused_ordering(21) 00:14:38.584 fused_ordering(22) 00:14:38.584 fused_ordering(23) 00:14:38.584 fused_ordering(24) 00:14:38.584 fused_ordering(25) 00:14:38.584 fused_ordering(26) 00:14:38.584 fused_ordering(27) 00:14:38.584 fused_ordering(28) 00:14:38.584 fused_ordering(29) 00:14:38.584 fused_ordering(30) 00:14:38.584 fused_ordering(31) 00:14:38.584 fused_ordering(32) 00:14:38.584 fused_ordering(33) 00:14:38.584 fused_ordering(34) 00:14:38.584 fused_ordering(35) 00:14:38.584 fused_ordering(36) 00:14:38.584 fused_ordering(37) 00:14:38.584 fused_ordering(38) 00:14:38.584 fused_ordering(39) 00:14:38.584 fused_ordering(40) 00:14:38.584 fused_ordering(41) 00:14:38.584 fused_ordering(42) 00:14:38.584 fused_ordering(43) 00:14:38.584 fused_ordering(44) 00:14:38.584 fused_ordering(45) 00:14:38.584 fused_ordering(46) 00:14:38.584 fused_ordering(47) 00:14:38.584 fused_ordering(48) 00:14:38.584 fused_ordering(49) 00:14:38.584 fused_ordering(50) 00:14:38.584 fused_ordering(51) 00:14:38.584 fused_ordering(52) 00:14:38.584 fused_ordering(53) 00:14:38.584 fused_ordering(54) 00:14:38.584 fused_ordering(55) 00:14:38.584 fused_ordering(56) 00:14:38.584 fused_ordering(57) 00:14:38.584 fused_ordering(58) 00:14:38.584 fused_ordering(59) 00:14:38.584 fused_ordering(60) 00:14:38.584 fused_ordering(61) 00:14:38.584 fused_ordering(62) 00:14:38.584 fused_ordering(63) 00:14:38.584 fused_ordering(64) 00:14:38.584 fused_ordering(65) 00:14:38.584 fused_ordering(66) 00:14:38.584 fused_ordering(67) 00:14:38.584 fused_ordering(68) 00:14:38.584 fused_ordering(69) 00:14:38.584 fused_ordering(70) 00:14:38.584 fused_ordering(71) 00:14:38.584 fused_ordering(72) 00:14:38.584 fused_ordering(73) 00:14:38.584 fused_ordering(74) 00:14:38.584 fused_ordering(75) 00:14:38.584 fused_ordering(76) 00:14:38.584 fused_ordering(77) 
00:14:38.584 fused_ordering(78) 00:14:38.584 fused_ordering(79) 00:14:38.584 fused_ordering(80) 00:14:38.584 fused_ordering(81) 00:14:38.584 fused_ordering(82) 00:14:38.584 fused_ordering(83) 00:14:38.584 fused_ordering(84) 00:14:38.584 fused_ordering(85) 00:14:38.584 fused_ordering(86) 00:14:38.584 fused_ordering(87) 00:14:38.584 fused_ordering(88) 00:14:38.584 fused_ordering(89) 00:14:38.584 fused_ordering(90) 00:14:38.584 fused_ordering(91) 00:14:38.584 fused_ordering(92) 00:14:38.584 fused_ordering(93) 00:14:38.584 fused_ordering(94) 00:14:38.585 fused_ordering(95) 00:14:38.585 fused_ordering(96) 00:14:38.585 fused_ordering(97) 00:14:38.585 fused_ordering(98) 00:14:38.585 fused_ordering(99) 00:14:38.585 fused_ordering(100) 00:14:38.585 fused_ordering(101) 00:14:38.585 fused_ordering(102) 00:14:38.585 fused_ordering(103) 00:14:38.585 fused_ordering(104) 00:14:38.585 fused_ordering(105) 00:14:38.585 fused_ordering(106) 00:14:38.585 fused_ordering(107) 00:14:38.585 fused_ordering(108) 00:14:38.585 fused_ordering(109) 00:14:38.585 fused_ordering(110) 00:14:38.585 fused_ordering(111) 00:14:38.585 fused_ordering(112) 00:14:38.585 fused_ordering(113) 00:14:38.585 fused_ordering(114) 00:14:38.585 fused_ordering(115) 00:14:38.585 fused_ordering(116) 00:14:38.585 fused_ordering(117) 00:14:38.585 fused_ordering(118) 00:14:38.585 fused_ordering(119) 00:14:38.585 fused_ordering(120) 00:14:38.585 fused_ordering(121) 00:14:38.585 fused_ordering(122) 00:14:38.585 fused_ordering(123) 00:14:38.585 fused_ordering(124) 00:14:38.585 fused_ordering(125) 00:14:38.585 fused_ordering(126) 00:14:38.585 fused_ordering(127) 00:14:38.585 fused_ordering(128) 00:14:38.585 fused_ordering(129) 00:14:38.585 fused_ordering(130) 00:14:38.585 fused_ordering(131) 00:14:38.585 fused_ordering(132) 00:14:38.585 fused_ordering(133) 00:14:38.585 fused_ordering(134) 00:14:38.585 fused_ordering(135) 00:14:38.585 fused_ordering(136) 00:14:38.585 fused_ordering(137) 00:14:38.585 fused_ordering(138) 00:14:38.585 fused_ordering(139) 00:14:38.585 fused_ordering(140) 00:14:38.585 fused_ordering(141) 00:14:38.585 fused_ordering(142) 00:14:38.585 fused_ordering(143) 00:14:38.585 fused_ordering(144) 00:14:38.585 fused_ordering(145) 00:14:38.585 fused_ordering(146) 00:14:38.585 fused_ordering(147) 00:14:38.585 fused_ordering(148) 00:14:38.585 fused_ordering(149) 00:14:38.585 fused_ordering(150) 00:14:38.585 fused_ordering(151) 00:14:38.585 fused_ordering(152) 00:14:38.585 fused_ordering(153) 00:14:38.585 fused_ordering(154) 00:14:38.585 fused_ordering(155) 00:14:38.585 fused_ordering(156) 00:14:38.585 fused_ordering(157) 00:14:38.585 fused_ordering(158) 00:14:38.585 fused_ordering(159) 00:14:38.585 fused_ordering(160) 00:14:38.585 fused_ordering(161) 00:14:38.585 fused_ordering(162) 00:14:38.585 fused_ordering(163) 00:14:38.585 fused_ordering(164) 00:14:38.585 fused_ordering(165) 00:14:38.585 fused_ordering(166) 00:14:38.585 fused_ordering(167) 00:14:38.585 fused_ordering(168) 00:14:38.585 fused_ordering(169) 00:14:38.585 fused_ordering(170) 00:14:38.585 fused_ordering(171) 00:14:38.585 fused_ordering(172) 00:14:38.585 fused_ordering(173) 00:14:38.585 fused_ordering(174) 00:14:38.585 fused_ordering(175) 00:14:38.585 fused_ordering(176) 00:14:38.585 fused_ordering(177) 00:14:38.585 fused_ordering(178) 00:14:38.585 fused_ordering(179) 00:14:38.585 fused_ordering(180) 00:14:38.585 fused_ordering(181) 00:14:38.585 fused_ordering(182) 00:14:38.585 fused_ordering(183) 00:14:38.585 fused_ordering(184) 00:14:38.585 fused_ordering(185) 00:14:38.585 
fused_ordering(186) 00:14:38.585 fused_ordering(187) 00:14:38.585 fused_ordering(188) 00:14:38.585 fused_ordering(189) 00:14:38.585 fused_ordering(190) 00:14:38.585 fused_ordering(191) 00:14:38.585 fused_ordering(192) 00:14:38.585 fused_ordering(193) 00:14:38.585 fused_ordering(194) 00:14:38.585 fused_ordering(195) 00:14:38.585 fused_ordering(196) 00:14:38.585 fused_ordering(197) 00:14:38.585 fused_ordering(198) 00:14:38.585 fused_ordering(199) 00:14:38.585 fused_ordering(200) 00:14:38.585 fused_ordering(201) 00:14:38.585 fused_ordering(202) 00:14:38.585 fused_ordering(203) 00:14:38.585 fused_ordering(204) 00:14:38.585 fused_ordering(205) 00:14:39.153 fused_ordering(206) 00:14:39.153 fused_ordering(207) 00:14:39.153 fused_ordering(208) 00:14:39.153 fused_ordering(209) 00:14:39.153 fused_ordering(210) 00:14:39.153 fused_ordering(211) 00:14:39.153 fused_ordering(212) 00:14:39.153 fused_ordering(213) 00:14:39.153 fused_ordering(214) 00:14:39.153 fused_ordering(215) 00:14:39.153 fused_ordering(216) 00:14:39.153 fused_ordering(217) 00:14:39.153 fused_ordering(218) 00:14:39.153 fused_ordering(219) 00:14:39.153 fused_ordering(220) 00:14:39.153 fused_ordering(221) 00:14:39.153 fused_ordering(222) 00:14:39.153 fused_ordering(223) 00:14:39.153 fused_ordering(224) 00:14:39.153 fused_ordering(225) 00:14:39.153 fused_ordering(226) 00:14:39.153 fused_ordering(227) 00:14:39.153 fused_ordering(228) 00:14:39.153 fused_ordering(229) 00:14:39.153 fused_ordering(230) 00:14:39.153 fused_ordering(231) 00:14:39.153 fused_ordering(232) 00:14:39.153 fused_ordering(233) 00:14:39.153 fused_ordering(234) 00:14:39.153 fused_ordering(235) 00:14:39.153 fused_ordering(236) 00:14:39.153 fused_ordering(237) 00:14:39.153 fused_ordering(238) 00:14:39.153 fused_ordering(239) 00:14:39.153 fused_ordering(240) 00:14:39.153 fused_ordering(241) 00:14:39.153 fused_ordering(242) 00:14:39.153 fused_ordering(243) 00:14:39.153 fused_ordering(244) 00:14:39.153 fused_ordering(245) 00:14:39.153 fused_ordering(246) 00:14:39.153 fused_ordering(247) 00:14:39.153 fused_ordering(248) 00:14:39.153 fused_ordering(249) 00:14:39.153 fused_ordering(250) 00:14:39.153 fused_ordering(251) 00:14:39.153 fused_ordering(252) 00:14:39.153 fused_ordering(253) 00:14:39.153 fused_ordering(254) 00:14:39.153 fused_ordering(255) 00:14:39.153 fused_ordering(256) 00:14:39.153 fused_ordering(257) 00:14:39.153 fused_ordering(258) 00:14:39.153 fused_ordering(259) 00:14:39.153 fused_ordering(260) 00:14:39.153 fused_ordering(261) 00:14:39.153 fused_ordering(262) 00:14:39.153 fused_ordering(263) 00:14:39.153 fused_ordering(264) 00:14:39.153 fused_ordering(265) 00:14:39.153 fused_ordering(266) 00:14:39.153 fused_ordering(267) 00:14:39.153 fused_ordering(268) 00:14:39.153 fused_ordering(269) 00:14:39.153 fused_ordering(270) 00:14:39.153 fused_ordering(271) 00:14:39.153 fused_ordering(272) 00:14:39.153 fused_ordering(273) 00:14:39.153 fused_ordering(274) 00:14:39.153 fused_ordering(275) 00:14:39.153 fused_ordering(276) 00:14:39.153 fused_ordering(277) 00:14:39.153 fused_ordering(278) 00:14:39.153 fused_ordering(279) 00:14:39.153 fused_ordering(280) 00:14:39.153 fused_ordering(281) 00:14:39.153 fused_ordering(282) 00:14:39.153 fused_ordering(283) 00:14:39.153 fused_ordering(284) 00:14:39.153 fused_ordering(285) 00:14:39.153 fused_ordering(286) 00:14:39.153 fused_ordering(287) 00:14:39.153 fused_ordering(288) 00:14:39.153 fused_ordering(289) 00:14:39.153 fused_ordering(290) 00:14:39.153 fused_ordering(291) 00:14:39.153 fused_ordering(292) 00:14:39.153 fused_ordering(293) 
00:14:39.153 fused_ordering(294) 00:14:39.153 fused_ordering(295) 00:14:39.153 fused_ordering(296) 00:14:39.153 fused_ordering(297) 00:14:39.153 fused_ordering(298) 00:14:39.153 fused_ordering(299) 00:14:39.153 fused_ordering(300) 00:14:39.153 fused_ordering(301) 00:14:39.153 fused_ordering(302) 00:14:39.153 fused_ordering(303) 00:14:39.153 fused_ordering(304) 00:14:39.153 fused_ordering(305) 00:14:39.153 fused_ordering(306) 00:14:39.153 fused_ordering(307) 00:14:39.153 fused_ordering(308) 00:14:39.153 fused_ordering(309) 00:14:39.153 fused_ordering(310) 00:14:39.153 fused_ordering(311) 00:14:39.153 fused_ordering(312) 00:14:39.153 fused_ordering(313) 00:14:39.153 fused_ordering(314) 00:14:39.154 fused_ordering(315) 00:14:39.154 fused_ordering(316) 00:14:39.154 fused_ordering(317) 00:14:39.154 fused_ordering(318) 00:14:39.154 fused_ordering(319) 00:14:39.154 fused_ordering(320) 00:14:39.154 fused_ordering(321) 00:14:39.154 fused_ordering(322) 00:14:39.154 fused_ordering(323) 00:14:39.154 fused_ordering(324) 00:14:39.154 fused_ordering(325) 00:14:39.154 fused_ordering(326) 00:14:39.154 fused_ordering(327) 00:14:39.154 fused_ordering(328) 00:14:39.154 fused_ordering(329) 00:14:39.154 fused_ordering(330) 00:14:39.154 fused_ordering(331) 00:14:39.154 fused_ordering(332) 00:14:39.154 fused_ordering(333) 00:14:39.154 fused_ordering(334) 00:14:39.154 fused_ordering(335) 00:14:39.154 fused_ordering(336) 00:14:39.154 fused_ordering(337) 00:14:39.154 fused_ordering(338) 00:14:39.154 fused_ordering(339) 00:14:39.154 fused_ordering(340) 00:14:39.154 fused_ordering(341) 00:14:39.154 fused_ordering(342) 00:14:39.154 fused_ordering(343) 00:14:39.154 fused_ordering(344) 00:14:39.154 fused_ordering(345) 00:14:39.154 fused_ordering(346) 00:14:39.154 fused_ordering(347) 00:14:39.154 fused_ordering(348) 00:14:39.154 fused_ordering(349) 00:14:39.154 fused_ordering(350) 00:14:39.154 fused_ordering(351) 00:14:39.154 fused_ordering(352) 00:14:39.154 fused_ordering(353) 00:14:39.154 fused_ordering(354) 00:14:39.154 fused_ordering(355) 00:14:39.154 fused_ordering(356) 00:14:39.154 fused_ordering(357) 00:14:39.154 fused_ordering(358) 00:14:39.154 fused_ordering(359) 00:14:39.154 fused_ordering(360) 00:14:39.154 fused_ordering(361) 00:14:39.154 fused_ordering(362) 00:14:39.154 fused_ordering(363) 00:14:39.154 fused_ordering(364) 00:14:39.154 fused_ordering(365) 00:14:39.154 fused_ordering(366) 00:14:39.154 fused_ordering(367) 00:14:39.154 fused_ordering(368) 00:14:39.154 fused_ordering(369) 00:14:39.154 fused_ordering(370) 00:14:39.154 fused_ordering(371) 00:14:39.154 fused_ordering(372) 00:14:39.154 fused_ordering(373) 00:14:39.154 fused_ordering(374) 00:14:39.154 fused_ordering(375) 00:14:39.154 fused_ordering(376) 00:14:39.154 fused_ordering(377) 00:14:39.154 fused_ordering(378) 00:14:39.154 fused_ordering(379) 00:14:39.154 fused_ordering(380) 00:14:39.154 fused_ordering(381) 00:14:39.154 fused_ordering(382) 00:14:39.154 fused_ordering(383) 00:14:39.154 fused_ordering(384) 00:14:39.154 fused_ordering(385) 00:14:39.154 fused_ordering(386) 00:14:39.154 fused_ordering(387) 00:14:39.154 fused_ordering(388) 00:14:39.154 fused_ordering(389) 00:14:39.154 fused_ordering(390) 00:14:39.154 fused_ordering(391) 00:14:39.154 fused_ordering(392) 00:14:39.154 fused_ordering(393) 00:14:39.154 fused_ordering(394) 00:14:39.154 fused_ordering(395) 00:14:39.154 fused_ordering(396) 00:14:39.154 fused_ordering(397) 00:14:39.154 fused_ordering(398) 00:14:39.154 fused_ordering(399) 00:14:39.154 fused_ordering(400) 00:14:39.154 
fused_ordering(401) 00:14:39.154 fused_ordering(402) 00:14:39.154 fused_ordering(403) 00:14:39.154 fused_ordering(404) 00:14:39.154 fused_ordering(405) 00:14:39.154 fused_ordering(406) 00:14:39.154 fused_ordering(407) 00:14:39.154 fused_ordering(408) 00:14:39.154 fused_ordering(409) 00:14:39.154 fused_ordering(410) 00:14:39.721 fused_ordering(411) 00:14:39.721 fused_ordering(412) 00:14:39.721 fused_ordering(413) 00:14:39.721 fused_ordering(414) 00:14:39.721 fused_ordering(415) 00:14:39.721 fused_ordering(416) 00:14:39.721 fused_ordering(417) 00:14:39.721 fused_ordering(418) 00:14:39.721 fused_ordering(419) 00:14:39.721 fused_ordering(420) 00:14:39.721 fused_ordering(421) 00:14:39.721 fused_ordering(422) 00:14:39.721 fused_ordering(423) 00:14:39.721 fused_ordering(424) 00:14:39.721 fused_ordering(425) 00:14:39.721 fused_ordering(426) 00:14:39.721 fused_ordering(427) 00:14:39.721 fused_ordering(428) 00:14:39.721 fused_ordering(429) 00:14:39.721 fused_ordering(430) 00:14:39.721 fused_ordering(431) 00:14:39.721 fused_ordering(432) 00:14:39.721 fused_ordering(433) 00:14:39.721 fused_ordering(434) 00:14:39.721 fused_ordering(435) 00:14:39.721 fused_ordering(436) 00:14:39.721 fused_ordering(437) 00:14:39.721 fused_ordering(438) 00:14:39.721 fused_ordering(439) 00:14:39.721 fused_ordering(440) 00:14:39.721 fused_ordering(441) 00:14:39.721 fused_ordering(442) 00:14:39.721 fused_ordering(443) 00:14:39.721 fused_ordering(444) 00:14:39.721 fused_ordering(445) 00:14:39.721 fused_ordering(446) 00:14:39.721 fused_ordering(447) 00:14:39.721 fused_ordering(448) 00:14:39.721 fused_ordering(449) 00:14:39.721 fused_ordering(450) 00:14:39.721 fused_ordering(451) 00:14:39.721 fused_ordering(452) 00:14:39.721 fused_ordering(453) 00:14:39.721 fused_ordering(454) 00:14:39.721 fused_ordering(455) 00:14:39.721 fused_ordering(456) 00:14:39.721 fused_ordering(457) 00:14:39.721 fused_ordering(458) 00:14:39.721 fused_ordering(459) 00:14:39.721 fused_ordering(460) 00:14:39.721 fused_ordering(461) 00:14:39.721 fused_ordering(462) 00:14:39.721 fused_ordering(463) 00:14:39.721 fused_ordering(464) 00:14:39.721 fused_ordering(465) 00:14:39.721 fused_ordering(466) 00:14:39.721 fused_ordering(467) 00:14:39.721 fused_ordering(468) 00:14:39.721 fused_ordering(469) 00:14:39.721 fused_ordering(470) 00:14:39.721 fused_ordering(471) 00:14:39.721 fused_ordering(472) 00:14:39.721 fused_ordering(473) 00:14:39.721 fused_ordering(474) 00:14:39.721 fused_ordering(475) 00:14:39.721 fused_ordering(476) 00:14:39.721 fused_ordering(477) 00:14:39.721 fused_ordering(478) 00:14:39.721 fused_ordering(479) 00:14:39.721 fused_ordering(480) 00:14:39.721 fused_ordering(481) 00:14:39.721 fused_ordering(482) 00:14:39.721 fused_ordering(483) 00:14:39.721 fused_ordering(484) 00:14:39.721 fused_ordering(485) 00:14:39.721 fused_ordering(486) 00:14:39.721 fused_ordering(487) 00:14:39.721 fused_ordering(488) 00:14:39.721 fused_ordering(489) 00:14:39.721 fused_ordering(490) 00:14:39.721 fused_ordering(491) 00:14:39.721 fused_ordering(492) 00:14:39.721 fused_ordering(493) 00:14:39.721 fused_ordering(494) 00:14:39.721 fused_ordering(495) 00:14:39.721 fused_ordering(496) 00:14:39.721 fused_ordering(497) 00:14:39.721 fused_ordering(498) 00:14:39.721 fused_ordering(499) 00:14:39.721 fused_ordering(500) 00:14:39.721 fused_ordering(501) 00:14:39.721 fused_ordering(502) 00:14:39.721 fused_ordering(503) 00:14:39.721 fused_ordering(504) 00:14:39.721 fused_ordering(505) 00:14:39.721 fused_ordering(506) 00:14:39.721 fused_ordering(507) 00:14:39.721 fused_ordering(508) 
00:14:39.721 fused_ordering(509) 00:14:39.721 fused_ordering(510) 00:14:39.721 fused_ordering(511) 00:14:39.721 fused_ordering(512) 00:14:39.721 fused_ordering(513) 00:14:39.721 fused_ordering(514) 00:14:39.721 fused_ordering(515) 00:14:39.721 fused_ordering(516) 00:14:39.721 fused_ordering(517) 00:14:39.721 fused_ordering(518) 00:14:39.721 fused_ordering(519) 00:14:39.721 fused_ordering(520) 00:14:39.721 fused_ordering(521) 00:14:39.721 fused_ordering(522) 00:14:39.721 fused_ordering(523) 00:14:39.721 fused_ordering(524) 00:14:39.721 fused_ordering(525) 00:14:39.721 fused_ordering(526) 00:14:39.721 fused_ordering(527) 00:14:39.721 fused_ordering(528) 00:14:39.721 fused_ordering(529) 00:14:39.721 fused_ordering(530) 00:14:39.721 fused_ordering(531) 00:14:39.721 fused_ordering(532) 00:14:39.721 fused_ordering(533) 00:14:39.721 fused_ordering(534) 00:14:39.721 fused_ordering(535) 00:14:39.721 fused_ordering(536) 00:14:39.721 fused_ordering(537) 00:14:39.721 fused_ordering(538) 00:14:39.721 fused_ordering(539) 00:14:39.721 fused_ordering(540) 00:14:39.721 fused_ordering(541) 00:14:39.721 fused_ordering(542) 00:14:39.721 fused_ordering(543) 00:14:39.721 fused_ordering(544) 00:14:39.721 fused_ordering(545) 00:14:39.721 fused_ordering(546) 00:14:39.721 fused_ordering(547) 00:14:39.721 fused_ordering(548) 00:14:39.721 fused_ordering(549) 00:14:39.721 fused_ordering(550) 00:14:39.721 fused_ordering(551) 00:14:39.721 fused_ordering(552) 00:14:39.721 fused_ordering(553) 00:14:39.721 fused_ordering(554) 00:14:39.721 fused_ordering(555) 00:14:39.721 fused_ordering(556) 00:14:39.721 fused_ordering(557) 00:14:39.721 fused_ordering(558) 00:14:39.721 fused_ordering(559) 00:14:39.721 fused_ordering(560) 00:14:39.721 fused_ordering(561) 00:14:39.721 fused_ordering(562) 00:14:39.721 fused_ordering(563) 00:14:39.721 fused_ordering(564) 00:14:39.721 fused_ordering(565) 00:14:39.721 fused_ordering(566) 00:14:39.721 fused_ordering(567) 00:14:39.721 fused_ordering(568) 00:14:39.721 fused_ordering(569) 00:14:39.721 fused_ordering(570) 00:14:39.721 fused_ordering(571) 00:14:39.721 fused_ordering(572) 00:14:39.721 fused_ordering(573) 00:14:39.721 fused_ordering(574) 00:14:39.721 fused_ordering(575) 00:14:39.721 fused_ordering(576) 00:14:39.721 fused_ordering(577) 00:14:39.721 fused_ordering(578) 00:14:39.721 fused_ordering(579) 00:14:39.721 fused_ordering(580) 00:14:39.721 fused_ordering(581) 00:14:39.721 fused_ordering(582) 00:14:39.721 fused_ordering(583) 00:14:39.722 fused_ordering(584) 00:14:39.722 fused_ordering(585) 00:14:39.722 fused_ordering(586) 00:14:39.722 fused_ordering(587) 00:14:39.722 fused_ordering(588) 00:14:39.722 fused_ordering(589) 00:14:39.722 fused_ordering(590) 00:14:39.722 fused_ordering(591) 00:14:39.722 fused_ordering(592) 00:14:39.722 fused_ordering(593) 00:14:39.722 fused_ordering(594) 00:14:39.722 fused_ordering(595) 00:14:39.722 fused_ordering(596) 00:14:39.722 fused_ordering(597) 00:14:39.722 fused_ordering(598) 00:14:39.722 fused_ordering(599) 00:14:39.722 fused_ordering(600) 00:14:39.722 fused_ordering(601) 00:14:39.722 fused_ordering(602) 00:14:39.722 fused_ordering(603) 00:14:39.722 fused_ordering(604) 00:14:39.722 fused_ordering(605) 00:14:39.722 fused_ordering(606) 00:14:39.722 fused_ordering(607) 00:14:39.722 fused_ordering(608) 00:14:39.722 fused_ordering(609) 00:14:39.722 fused_ordering(610) 00:14:39.722 fused_ordering(611) 00:14:39.722 fused_ordering(612) 00:14:39.722 fused_ordering(613) 00:14:39.722 fused_ordering(614) 00:14:39.722 fused_ordering(615) 00:14:40.659 
fused_ordering(616) 00:14:40.659 fused_ordering(617) 00:14:40.659 fused_ordering(618) 00:14:40.659 fused_ordering(619) 00:14:40.659 fused_ordering(620) 00:14:40.659 fused_ordering(621) 00:14:40.659 fused_ordering(622) 00:14:40.659 fused_ordering(623) 00:14:40.659 fused_ordering(624) 00:14:40.659 fused_ordering(625) 00:14:40.659 fused_ordering(626) 00:14:40.659 fused_ordering(627) 00:14:40.659 fused_ordering(628) 00:14:40.659 fused_ordering(629) 00:14:40.659 fused_ordering(630) 00:14:40.659 fused_ordering(631) 00:14:40.659 fused_ordering(632) 00:14:40.659 fused_ordering(633) 00:14:40.659 fused_ordering(634) 00:14:40.659 fused_ordering(635) 00:14:40.659 fused_ordering(636) 00:14:40.659 fused_ordering(637) 00:14:40.659 fused_ordering(638) 00:14:40.659 fused_ordering(639) 00:14:40.659 fused_ordering(640) 00:14:40.659 fused_ordering(641) 00:14:40.659 fused_ordering(642) 00:14:40.659 fused_ordering(643) 00:14:40.659 fused_ordering(644) 00:14:40.659 fused_ordering(645) 00:14:40.659 fused_ordering(646) 00:14:40.659 fused_ordering(647) 00:14:40.659 fused_ordering(648) 00:14:40.659 fused_ordering(649) 00:14:40.659 fused_ordering(650) 00:14:40.659 fused_ordering(651) 00:14:40.659 fused_ordering(652) 00:14:40.659 fused_ordering(653) 00:14:40.659 fused_ordering(654) 00:14:40.659 fused_ordering(655) 00:14:40.659 fused_ordering(656) 00:14:40.659 fused_ordering(657) 00:14:40.659 fused_ordering(658) 00:14:40.659 fused_ordering(659) 00:14:40.659 fused_ordering(660) 00:14:40.659 fused_ordering(661) 00:14:40.659 fused_ordering(662) 00:14:40.659 fused_ordering(663) 00:14:40.659 fused_ordering(664) 00:14:40.659 fused_ordering(665) 00:14:40.659 fused_ordering(666) 00:14:40.659 fused_ordering(667) 00:14:40.659 fused_ordering(668) 00:14:40.659 fused_ordering(669) 00:14:40.659 fused_ordering(670) 00:14:40.659 fused_ordering(671) 00:14:40.659 fused_ordering(672) 00:14:40.659 fused_ordering(673) 00:14:40.659 fused_ordering(674) 00:14:40.659 fused_ordering(675) 00:14:40.659 fused_ordering(676) 00:14:40.659 fused_ordering(677) 00:14:40.659 fused_ordering(678) 00:14:40.659 fused_ordering(679) 00:14:40.659 fused_ordering(680) 00:14:40.659 fused_ordering(681) 00:14:40.659 fused_ordering(682) 00:14:40.659 fused_ordering(683) 00:14:40.659 fused_ordering(684) 00:14:40.659 fused_ordering(685) 00:14:40.659 fused_ordering(686) 00:14:40.659 fused_ordering(687) 00:14:40.659 fused_ordering(688) 00:14:40.659 fused_ordering(689) 00:14:40.659 fused_ordering(690) 00:14:40.659 fused_ordering(691) 00:14:40.659 fused_ordering(692) 00:14:40.659 fused_ordering(693) 00:14:40.659 fused_ordering(694) 00:14:40.659 fused_ordering(695) 00:14:40.659 fused_ordering(696) 00:14:40.659 fused_ordering(697) 00:14:40.659 fused_ordering(698) 00:14:40.659 fused_ordering(699) 00:14:40.659 fused_ordering(700) 00:14:40.659 fused_ordering(701) 00:14:40.659 fused_ordering(702) 00:14:40.659 fused_ordering(703) 00:14:40.659 fused_ordering(704) 00:14:40.659 fused_ordering(705) 00:14:40.659 fused_ordering(706) 00:14:40.659 fused_ordering(707) 00:14:40.659 fused_ordering(708) 00:14:40.659 fused_ordering(709) 00:14:40.659 fused_ordering(710) 00:14:40.659 fused_ordering(711) 00:14:40.660 fused_ordering(712) 00:14:40.660 fused_ordering(713) 00:14:40.660 fused_ordering(714) 00:14:40.660 fused_ordering(715) 00:14:40.660 fused_ordering(716) 00:14:40.660 fused_ordering(717) 00:14:40.660 fused_ordering(718) 00:14:40.660 fused_ordering(719) 00:14:40.660 fused_ordering(720) 00:14:40.660 fused_ordering(721) 00:14:40.660 fused_ordering(722) 00:14:40.660 fused_ordering(723) 
00:14:40.660 fused_ordering(724) 00:14:40.660 fused_ordering(725) 00:14:40.660 fused_ordering(726) 00:14:40.660 fused_ordering(727) 00:14:40.660 fused_ordering(728) 00:14:40.660 fused_ordering(729) 00:14:40.660 fused_ordering(730) 00:14:40.660 fused_ordering(731) 00:14:40.660 fused_ordering(732) 00:14:40.660 fused_ordering(733) 00:14:40.660 fused_ordering(734) 00:14:40.660 fused_ordering(735) 00:14:40.660 fused_ordering(736) 00:14:40.660 fused_ordering(737) 00:14:40.660 fused_ordering(738) 00:14:40.660 fused_ordering(739) 00:14:40.660 fused_ordering(740) 00:14:40.660 fused_ordering(741) 00:14:40.660 fused_ordering(742) 00:14:40.660 fused_ordering(743) 00:14:40.660 fused_ordering(744) 00:14:40.660 fused_ordering(745) 00:14:40.660 fused_ordering(746) 00:14:40.660 fused_ordering(747) 00:14:40.660 fused_ordering(748) 00:14:40.660 fused_ordering(749) 00:14:40.660 fused_ordering(750) 00:14:40.660 fused_ordering(751) 00:14:40.660 fused_ordering(752) 00:14:40.660 fused_ordering(753) 00:14:40.660 fused_ordering(754) 00:14:40.660 fused_ordering(755) 00:14:40.660 fused_ordering(756) 00:14:40.660 fused_ordering(757) 00:14:40.660 fused_ordering(758) 00:14:40.660 fused_ordering(759) 00:14:40.660 fused_ordering(760) 00:14:40.660 fused_ordering(761) 00:14:40.660 fused_ordering(762) 00:14:40.660 fused_ordering(763) 00:14:40.660 fused_ordering(764) 00:14:40.660 fused_ordering(765) 00:14:40.660 fused_ordering(766) 00:14:40.660 fused_ordering(767) 00:14:40.660 fused_ordering(768) 00:14:40.660 fused_ordering(769) 00:14:40.660 fused_ordering(770) 00:14:40.660 fused_ordering(771) 00:14:40.660 fused_ordering(772) 00:14:40.660 fused_ordering(773) 00:14:40.660 fused_ordering(774) 00:14:40.660 fused_ordering(775) 00:14:40.660 fused_ordering(776) 00:14:40.660 fused_ordering(777) 00:14:40.660 fused_ordering(778) 00:14:40.660 fused_ordering(779) 00:14:40.660 fused_ordering(780) 00:14:40.660 fused_ordering(781) 00:14:40.660 fused_ordering(782) 00:14:40.660 fused_ordering(783) 00:14:40.660 fused_ordering(784) 00:14:40.660 fused_ordering(785) 00:14:40.660 fused_ordering(786) 00:14:40.660 fused_ordering(787) 00:14:40.660 fused_ordering(788) 00:14:40.660 fused_ordering(789) 00:14:40.660 fused_ordering(790) 00:14:40.660 fused_ordering(791) 00:14:40.660 fused_ordering(792) 00:14:40.660 fused_ordering(793) 00:14:40.660 fused_ordering(794) 00:14:40.660 fused_ordering(795) 00:14:40.660 fused_ordering(796) 00:14:40.660 fused_ordering(797) 00:14:40.660 fused_ordering(798) 00:14:40.660 fused_ordering(799) 00:14:40.660 fused_ordering(800) 00:14:40.660 fused_ordering(801) 00:14:40.660 fused_ordering(802) 00:14:40.660 fused_ordering(803) 00:14:40.660 fused_ordering(804) 00:14:40.660 fused_ordering(805) 00:14:40.660 fused_ordering(806) 00:14:40.660 fused_ordering(807) 00:14:40.660 fused_ordering(808) 00:14:40.660 fused_ordering(809) 00:14:40.660 fused_ordering(810) 00:14:40.660 fused_ordering(811) 00:14:40.660 fused_ordering(812) 00:14:40.660 fused_ordering(813) 00:14:40.660 fused_ordering(814) 00:14:40.660 fused_ordering(815) 00:14:40.660 fused_ordering(816) 00:14:40.660 fused_ordering(817) 00:14:40.660 fused_ordering(818) 00:14:40.660 fused_ordering(819) 00:14:40.660 fused_ordering(820) 00:14:41.231 fused_ordering(821) 00:14:41.231 fused_ordering(822) 00:14:41.231 fused_ordering(823) 00:14:41.231 fused_ordering(824) 00:14:41.231 fused_ordering(825) 00:14:41.231 fused_ordering(826) 00:14:41.231 fused_ordering(827) 00:14:41.231 fused_ordering(828) 00:14:41.231 fused_ordering(829) 00:14:41.231 fused_ordering(830) 00:14:41.231 
fused_ordering(831) 00:14:41.231 fused_ordering(832) 00:14:41.231 fused_ordering(833) 00:14:41.231 fused_ordering(834) 00:14:41.231 fused_ordering(835) 00:14:41.231 fused_ordering(836) 00:14:41.231 fused_ordering(837) 00:14:41.231 fused_ordering(838) 00:14:41.231 fused_ordering(839) 00:14:41.231 fused_ordering(840) 00:14:41.231 fused_ordering(841) 00:14:41.231 fused_ordering(842) 00:14:41.231 fused_ordering(843) 00:14:41.231 fused_ordering(844) 00:14:41.231 fused_ordering(845) 00:14:41.231 fused_ordering(846) 00:14:41.231 fused_ordering(847) 00:14:41.231 fused_ordering(848) 00:14:41.231 fused_ordering(849) 00:14:41.231 fused_ordering(850) 00:14:41.231 fused_ordering(851) 00:14:41.231 fused_ordering(852) 00:14:41.231 fused_ordering(853) 00:14:41.231 fused_ordering(854) 00:14:41.231 fused_ordering(855) 00:14:41.231 fused_ordering(856) 00:14:41.231 fused_ordering(857) 00:14:41.231 fused_ordering(858) 00:14:41.231 fused_ordering(859) 00:14:41.231 fused_ordering(860) 00:14:41.231 fused_ordering(861) 00:14:41.231 fused_ordering(862) 00:14:41.231 fused_ordering(863) 00:14:41.231 fused_ordering(864) 00:14:41.231 fused_ordering(865) 00:14:41.231 fused_ordering(866) 00:14:41.231 fused_ordering(867) 00:14:41.231 fused_ordering(868) 00:14:41.231 fused_ordering(869) 00:14:41.231 fused_ordering(870) 00:14:41.231 fused_ordering(871) 00:14:41.231 fused_ordering(872) 00:14:41.231 fused_ordering(873) 00:14:41.231 fused_ordering(874) 00:14:41.231 fused_ordering(875) 00:14:41.231 fused_ordering(876) 00:14:41.231 fused_ordering(877) 00:14:41.231 fused_ordering(878) 00:14:41.231 fused_ordering(879) 00:14:41.231 fused_ordering(880) 00:14:41.231 fused_ordering(881) 00:14:41.231 fused_ordering(882) 00:14:41.231 fused_ordering(883) 00:14:41.231 fused_ordering(884) 00:14:41.231 fused_ordering(885) 00:14:41.231 fused_ordering(886) 00:14:41.231 fused_ordering(887) 00:14:41.231 fused_ordering(888) 00:14:41.231 fused_ordering(889) 00:14:41.231 fused_ordering(890) 00:14:41.231 fused_ordering(891) 00:14:41.231 fused_ordering(892) 00:14:41.231 fused_ordering(893) 00:14:41.231 fused_ordering(894) 00:14:41.231 fused_ordering(895) 00:14:41.231 fused_ordering(896) 00:14:41.231 fused_ordering(897) 00:14:41.231 fused_ordering(898) 00:14:41.231 fused_ordering(899) 00:14:41.231 fused_ordering(900) 00:14:41.231 fused_ordering(901) 00:14:41.231 fused_ordering(902) 00:14:41.231 fused_ordering(903) 00:14:41.231 fused_ordering(904) 00:14:41.231 fused_ordering(905) 00:14:41.231 fused_ordering(906) 00:14:41.231 fused_ordering(907) 00:14:41.231 fused_ordering(908) 00:14:41.231 fused_ordering(909) 00:14:41.231 fused_ordering(910) 00:14:41.231 fused_ordering(911) 00:14:41.231 fused_ordering(912) 00:14:41.231 fused_ordering(913) 00:14:41.231 fused_ordering(914) 00:14:41.231 fused_ordering(915) 00:14:41.231 fused_ordering(916) 00:14:41.231 fused_ordering(917) 00:14:41.231 fused_ordering(918) 00:14:41.231 fused_ordering(919) 00:14:41.231 fused_ordering(920) 00:14:41.231 fused_ordering(921) 00:14:41.231 fused_ordering(922) 00:14:41.231 fused_ordering(923) 00:14:41.231 fused_ordering(924) 00:14:41.231 fused_ordering(925) 00:14:41.231 fused_ordering(926) 00:14:41.231 fused_ordering(927) 00:14:41.231 fused_ordering(928) 00:14:41.231 fused_ordering(929) 00:14:41.231 fused_ordering(930) 00:14:41.231 fused_ordering(931) 00:14:41.231 fused_ordering(932) 00:14:41.231 fused_ordering(933) 00:14:41.231 fused_ordering(934) 00:14:41.231 fused_ordering(935) 00:14:41.231 fused_ordering(936) 00:14:41.231 fused_ordering(937) 00:14:41.231 fused_ordering(938) 
00:14:41.231 fused_ordering(939) 00:14:41.231 fused_ordering(940) 00:14:41.231 fused_ordering(941) 00:14:41.231 fused_ordering(942) 00:14:41.231 fused_ordering(943) 00:14:41.231 fused_ordering(944) 00:14:41.231 fused_ordering(945) 00:14:41.231 fused_ordering(946) 00:14:41.231 fused_ordering(947) 00:14:41.231 fused_ordering(948) 00:14:41.231 fused_ordering(949) 00:14:41.231 fused_ordering(950) 00:14:41.231 fused_ordering(951) 00:14:41.231 fused_ordering(952) 00:14:41.231 fused_ordering(953) 00:14:41.231 fused_ordering(954) 00:14:41.231 fused_ordering(955) 00:14:41.231 fused_ordering(956) 00:14:41.231 fused_ordering(957) 00:14:41.231 fused_ordering(958) 00:14:41.231 fused_ordering(959) 00:14:41.231 fused_ordering(960) 00:14:41.231 fused_ordering(961) 00:14:41.231 fused_ordering(962) 00:14:41.231 fused_ordering(963) 00:14:41.231 fused_ordering(964) 00:14:41.231 fused_ordering(965) 00:14:41.231 fused_ordering(966) 00:14:41.231 fused_ordering(967) 00:14:41.231 fused_ordering(968) 00:14:41.231 fused_ordering(969) 00:14:41.231 fused_ordering(970) 00:14:41.231 fused_ordering(971) 00:14:41.231 fused_ordering(972) 00:14:41.231 fused_ordering(973) 00:14:41.231 fused_ordering(974) 00:14:41.231 fused_ordering(975) 00:14:41.231 fused_ordering(976) 00:14:41.231 fused_ordering(977) 00:14:41.231 fused_ordering(978) 00:14:41.231 fused_ordering(979) 00:14:41.231 fused_ordering(980) 00:14:41.231 fused_ordering(981) 00:14:41.231 fused_ordering(982) 00:14:41.231 fused_ordering(983) 00:14:41.231 fused_ordering(984) 00:14:41.231 fused_ordering(985) 00:14:41.231 fused_ordering(986) 00:14:41.231 fused_ordering(987) 00:14:41.231 fused_ordering(988) 00:14:41.231 fused_ordering(989) 00:14:41.231 fused_ordering(990) 00:14:41.231 fused_ordering(991) 00:14:41.231 fused_ordering(992) 00:14:41.231 fused_ordering(993) 00:14:41.231 fused_ordering(994) 00:14:41.231 fused_ordering(995) 00:14:41.231 fused_ordering(996) 00:14:41.231 fused_ordering(997) 00:14:41.231 fused_ordering(998) 00:14:41.231 fused_ordering(999) 00:14:41.231 fused_ordering(1000) 00:14:41.231 fused_ordering(1001) 00:14:41.231 fused_ordering(1002) 00:14:41.231 fused_ordering(1003) 00:14:41.231 fused_ordering(1004) 00:14:41.231 fused_ordering(1005) 00:14:41.231 fused_ordering(1006) 00:14:41.231 fused_ordering(1007) 00:14:41.231 fused_ordering(1008) 00:14:41.231 fused_ordering(1009) 00:14:41.231 fused_ordering(1010) 00:14:41.231 fused_ordering(1011) 00:14:41.231 fused_ordering(1012) 00:14:41.231 fused_ordering(1013) 00:14:41.231 fused_ordering(1014) 00:14:41.231 fused_ordering(1015) 00:14:41.231 fused_ordering(1016) 00:14:41.231 fused_ordering(1017) 00:14:41.231 fused_ordering(1018) 00:14:41.231 fused_ordering(1019) 00:14:41.231 fused_ordering(1020) 00:14:41.231 fused_ordering(1021) 00:14:41.231 fused_ordering(1022) 00:14:41.231 fused_ordering(1023) 00:14:41.231 00:54:25 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:41.231 00:54:25 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:41.231 00:54:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.231 00:54:25 -- nvmf/common.sh@116 -- # sync 00:14:41.231 00:54:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.231 00:54:25 -- nvmf/common.sh@119 -- # set +e 00:14:41.231 00:54:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.231 00:54:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.231 rmmod nvme_tcp 00:14:41.231 rmmod nvme_fabrics 00:14:41.231 rmmod nvme_keyring 00:14:41.231 00:54:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.231 00:54:25 
-- nvmf/common.sh@123 -- # set -e 00:14:41.232 00:54:25 -- nvmf/common.sh@124 -- # return 0 00:14:41.232 00:54:25 -- nvmf/common.sh@477 -- # '[' -n 3365591 ']' 00:14:41.232 00:54:25 -- nvmf/common.sh@478 -- # killprocess 3365591 00:14:41.232 00:54:25 -- common/autotest_common.sh@926 -- # '[' -z 3365591 ']' 00:14:41.232 00:54:25 -- common/autotest_common.sh@930 -- # kill -0 3365591 00:14:41.232 00:54:25 -- common/autotest_common.sh@931 -- # uname 00:14:41.232 00:54:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.232 00:54:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3365591 00:14:41.232 00:54:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.232 00:54:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.232 00:54:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3365591' 00:14:41.232 killing process with pid 3365591 00:14:41.232 00:54:25 -- common/autotest_common.sh@945 -- # kill 3365591 00:14:41.232 00:54:25 -- common/autotest_common.sh@950 -- # wait 3365591 00:14:41.490 00:54:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:41.490 00:54:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:41.490 00:54:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:41.490 00:54:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.490 00:54:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:41.490 00:54:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.490 00:54:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.490 00:54:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.028 00:54:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:44.028 00:14:44.028 real 0m8.897s 00:14:44.028 user 0m7.057s 00:14:44.028 sys 0m3.614s 00:14:44.028 00:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.028 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:14:44.028 ************************************ 00:14:44.028 END TEST nvmf_fused_ordering 00:14:44.028 ************************************ 00:14:44.028 00:54:27 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:44.028 00:54:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.028 00:54:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.028 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:14:44.028 ************************************ 00:14:44.028 START TEST nvmf_delete_subsystem 00:14:44.028 ************************************ 00:14:44.028 00:54:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:44.028 * Looking for test storage... 
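The nvmf_fused_ordering teardown traced above boils down to three steps: stop the nvmf_tgt process, unload the kernel-side initiator modules, and flush the addresses and namespace used by the test. A minimal stand-alone sketch of that flow is shown below; the PID value and the cvl_0_* interface/namespace names are the ones from this run and are assumptions on any other system.

#!/usr/bin/env bash
# Sketch of the nvmftestfini/killprocess flow seen above (run as root).
# 3365591 and the cvl_0_* names are taken from this log and will differ elsewhere.
nvmfpid=3365591

# Stop the target and wait until the PID is gone.
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done
fi

# Unload the kernel-side NVMe/TCP initiator stack (mirrors the rmmod lines above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop the test addresses and the target network namespace.
ip -4 addr flush cvl_0_1 2>/dev/null || true
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true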
00:14:44.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.028 00:54:27 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.028 00:54:27 -- nvmf/common.sh@7 -- # uname -s 00:14:44.028 00:54:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.028 00:54:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.028 00:54:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.028 00:54:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.028 00:54:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.028 00:54:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.028 00:54:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.028 00:54:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.028 00:54:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.028 00:54:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.028 00:54:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.028 00:54:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.028 00:54:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.028 00:54:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.028 00:54:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.028 00:54:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.028 00:54:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.028 00:54:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.028 00:54:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.028 00:54:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.028 00:54:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.028 00:54:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.028 00:54:27 -- paths/export.sh@5 -- # export PATH 00:14:44.028 00:54:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.028 00:54:27 -- nvmf/common.sh@46 -- # : 0 00:14:44.028 00:54:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:44.028 00:54:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:44.028 00:54:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:44.028 00:54:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.028 00:54:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.028 00:54:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:44.028 00:54:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:44.028 00:54:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:44.028 00:54:27 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:44.028 00:54:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:44.028 00:54:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.028 00:54:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:44.028 00:54:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:44.028 00:54:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:44.028 00:54:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.028 00:54:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.028 00:54:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.028 00:54:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:44.028 00:54:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:44.028 00:54:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:44.028 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:14:45.935 00:54:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.935 00:54:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:45.935 00:54:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:45.935 00:54:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:45.935 00:54:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:45.935 00:54:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:45.935 00:54:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:45.935 00:54:29 -- nvmf/common.sh@294 -- # net_devs=() 00:14:45.935 00:54:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:45.935 00:54:29 -- nvmf/common.sh@295 -- # e810=() 00:14:45.935 00:54:29 -- nvmf/common.sh@295 -- # local -ga e810 00:14:45.935 00:54:29 -- nvmf/common.sh@296 -- # x722=() 
00:14:45.935 00:54:29 -- nvmf/common.sh@296 -- # local -ga x722 00:14:45.935 00:54:29 -- nvmf/common.sh@297 -- # mlx=() 00:14:45.935 00:54:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:45.935 00:54:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.935 00:54:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:45.935 00:54:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:45.935 00:54:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:45.935 00:54:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.935 00:54:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:45.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:45.935 00:54:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.935 00:54:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.936 00:54:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:45.936 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:45.936 00:54:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:45.936 00:54:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.936 00:54:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.936 00:54:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.936 00:54:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.936 00:54:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:45.936 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:45.936 00:54:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:45.936 00:54:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.936 00:54:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.936 00:54:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.936 00:54:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.936 00:54:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:45.936 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:45.936 00:54:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.936 00:54:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:45.936 00:54:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:45.936 00:54:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:45.936 00:54:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.936 00:54:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.936 00:54:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.936 00:54:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:45.936 00:54:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.936 00:54:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.936 00:54:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:45.936 00:54:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.936 00:54:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.936 00:54:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:45.936 00:54:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:45.936 00:54:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.936 00:54:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.936 00:54:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.936 00:54:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.936 00:54:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:45.936 00:54:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.936 00:54:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.936 00:54:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.936 00:54:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:45.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:14:45.936 00:14:45.936 --- 10.0.0.2 ping statistics --- 00:14:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.936 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:45.936 00:54:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:14:45.936 00:14:45.936 --- 10.0.0.1 ping statistics --- 00:14:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.936 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:14:45.936 00:54:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.936 00:54:29 -- nvmf/common.sh@410 -- # return 0 00:14:45.936 00:54:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:45.936 00:54:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.936 00:54:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:45.936 00:54:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.936 00:54:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:45.936 00:54:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:45.936 00:54:29 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:45.936 00:54:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:45.936 00:54:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:45.936 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.936 00:54:29 -- nvmf/common.sh@469 -- # nvmfpid=3368103 00:14:45.936 00:54:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:45.936 00:54:29 -- nvmf/common.sh@470 -- # waitforlisten 3368103 00:14:45.936 00:54:29 -- common/autotest_common.sh@819 -- # '[' -z 3368103 ']' 00:14:45.936 00:54:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.936 00:54:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.936 00:54:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.936 00:54:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.936 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.936 [2024-07-23 00:54:29.890112] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:45.936 [2024-07-23 00:54:29.890191] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.936 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.936 [2024-07-23 00:54:29.955778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.936 [2024-07-23 00:54:30.050061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.936 [2024-07-23 00:54:30.050234] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.936 [2024-07-23 00:54:30.050253] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.936 [2024-07-23 00:54:30.050267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
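The network setup traced above (nvmf_tcp_init followed by nvmfappstart) pairs the two ports of one NIC back-to-back: one port stays in the root namespace as the initiator, the other is moved into a namespace and carries the target. The sketch below reproduces that wiring under the same assumptions as this run (interfaces cvl_0_0/cvl_0_1, addresses 10.0.0.1 and 10.0.0.2, and an nvmf_tgt binary built in the workspace path shown in the log); it is illustrative, not the harness itself.

#!/usr/bin/env bash
# Sketch of the back-to-back NVMe/TCP topology used by nvmftestinit (run as root).
TGT_IF=cvl_0_0          # port moved into the target namespace
INI_IF=cvl_0_1          # port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow the NVMe/TCP port through the host firewall and verify reachability.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2

# Start the target inside the namespace, as nvmfappstart does; the harness
# then polls the /var/tmp/spdk.sock RPC socket before issuing any rpc_cmd.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &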
00:14:45.936 [2024-07-23 00:54:30.050320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.936 [2024-07-23 00:54:30.050326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.875 00:54:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.875 00:54:30 -- common/autotest_common.sh@852 -- # return 0 00:14:46.875 00:54:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:46.875 00:54:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 00:54:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.875 00:54:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 [2024-07-23 00:54:30.885813] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.875 00:54:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.875 00:54:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 00:54:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.875 00:54:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 [2024-07-23 00:54:30.901995] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.875 00:54:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.875 00:54:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 NULL1 00:14:46.875 00:54:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:46.875 00:54:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 Delay0 00:14:46.875 00:54:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.875 00:54:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.875 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.875 00:54:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@28 -- # perf_pid=3368260 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:46.875 00:54:30 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:46.875 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.875 [2024-07-23 00:54:30.976813] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:48.781 00:54:32 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.781 00:54:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.781 00:54:32 -- common/autotest_common.sh@10 -- # set +x 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Write completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Write completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Write completed with error (sct=0, sc=8) 00:14:49.041 Write completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Write completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 [2024-07-23 
00:54:33.108316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52dd30 is same with the state(5) to be set 00:14:49.041 Write completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 Read completed with error (sct=0, sc=8) 00:14:49.041 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read 
completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 [2024-07-23 00:54:33.109073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e3a0 is same with the state(5) to be set 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 starting I/O failed: -6 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 [2024-07-23 00:54:33.109578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f7738000c00 is same with the state(5) to be set 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Write completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.042 Read completed with error (sct=0, sc=8) 00:14:49.980 [2024-07-23 00:54:34.074998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5315e0 is same with the state(5) to be set 00:14:49.980 Read completed with error (sct=0, sc=8) 00:14:49.980 Write completed with error (sct=0, sc=8) 00:14:49.980 Write completed with error (sct=0, sc=8) 00:14:49.980 Read completed with error (sct=0, sc=8) 00:14:49.980 Read completed with error (sct=0, sc=8) 00:14:49.980 Read completed with error (sct=0, sc=8) 00:14:49.980 Write completed with error (sct=0, sc=8) 00:14:49.980 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed 
with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 [2024-07-23 00:54:34.109116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e650 is same with the state(5) to be set 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 
00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 [2024-07-23 00:54:34.109386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e060 is same with the state(5) to be set 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 [2024-07-23 00:54:34.110526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f773800bf20 is same with the state(5) to be set 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 Read completed with error (sct=0, sc=8) 00:14:49.981 Write completed with error (sct=0, sc=8) 00:14:49.981 [2024-07-23 00:54:34.111156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f773800c480 is same with the state(5) to be set 00:14:49.981 [2024-07-23 00:54:34.111693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5315e0 (9): Bad file descriptor 00:14:49.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:49.981 00:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.981 00:54:34 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:49.981 00:54:34 -- target/delete_subsystem.sh@35 -- # kill -0 3368260 00:14:49.981 00:54:34 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:49.981 Initializing NVMe Controllers 00:14:49.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:49.981 Controller IO queue size 128, less than required. 00:14:49.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:49.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:49.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:49.981 Initialization complete. 
Launching workers. 00:14:49.981 ======================================================== 00:14:49.981 Latency(us) 00:14:49.981 Device Information : IOPS MiB/s Average min max 00:14:49.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.59 0.09 890632.33 766.19 1012093.74 00:14:49.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.88 0.07 938136.37 382.70 1012038.03 00:14:49.981 ======================================================== 00:14:49.981 Total : 343.47 0.17 911638.45 382.70 1012093.74 00:14:49.981 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@35 -- # kill -0 3368260 00:14:50.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3368260) - No such process 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@45 -- # NOT wait 3368260 00:14:50.552 00:54:34 -- common/autotest_common.sh@640 -- # local es=0 00:14:50.552 00:54:34 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3368260 00:14:50.552 00:54:34 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:50.552 00:54:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:50.552 00:54:34 -- common/autotest_common.sh@632 -- # type -t wait 00:14:50.552 00:54:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:50.552 00:54:34 -- common/autotest_common.sh@643 -- # wait 3368260 00:14:50.552 00:54:34 -- common/autotest_common.sh@643 -- # es=1 00:14:50.552 00:54:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:50.552 00:54:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:50.552 00:54:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:50.552 00:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.552 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:50.552 00:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.552 00:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.552 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:50.552 [2024-07-23 00:54:34.636190] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.552 00:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.552 00:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.552 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:50.552 00:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@54 -- # perf_pid=3368671 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.552 00:54:34 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:50.552 EAL: No free 2048 
kB hugepages reported on node 1 00:14:50.552 [2024-07-23 00:54:34.698709] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:51.121 00:54:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.121 00:54:35 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:51.121 00:54:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.700 00:54:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.700 00:54:35 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:51.700 00:54:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.017 00:54:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.017 00:54:36 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:52.017 00:54:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.586 00:54:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.586 00:54:36 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:52.586 00:54:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:53.153 00:54:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:53.153 00:54:37 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:53.154 00:54:37 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:53.721 00:54:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:53.721 00:54:37 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:53.721 00:54:37 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:53.721 Initializing NVMe Controllers 00:14:53.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.721 Controller IO queue size 128, less than required. 00:14:53.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:53.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:53.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:53.721 Initialization complete. Launching workers. 
00:14:53.721 ======================================================== 00:14:53.721 Latency(us) 00:14:53.721 Device Information : IOPS MiB/s Average min max 00:14:53.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003500.74 1000235.06 1040834.18 00:14:53.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005373.71 1000248.49 1041972.76 00:14:53.721 ======================================================== 00:14:53.721 Total : 256.00 0.12 1004437.23 1000235.06 1041972.76 00:14:53.721 00:14:53.980 00:54:38 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:53.980 00:54:38 -- target/delete_subsystem.sh@57 -- # kill -0 3368671 00:14:53.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3368671) - No such process 00:14:53.980 00:54:38 -- target/delete_subsystem.sh@67 -- # wait 3368671 00:14:53.980 00:54:38 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:53.980 00:54:38 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:53.980 00:54:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.980 00:54:38 -- nvmf/common.sh@116 -- # sync 00:14:53.980 00:54:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.980 00:54:38 -- nvmf/common.sh@119 -- # set +e 00:14:53.980 00:54:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.980 00:54:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.980 rmmod nvme_tcp 00:14:54.237 rmmod nvme_fabrics 00:14:54.237 rmmod nvme_keyring 00:14:54.237 00:54:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:54.237 00:54:38 -- nvmf/common.sh@123 -- # set -e 00:14:54.238 00:54:38 -- nvmf/common.sh@124 -- # return 0 00:14:54.238 00:54:38 -- nvmf/common.sh@477 -- # '[' -n 3368103 ']' 00:14:54.238 00:54:38 -- nvmf/common.sh@478 -- # killprocess 3368103 00:14:54.238 00:54:38 -- common/autotest_common.sh@926 -- # '[' -z 3368103 ']' 00:14:54.238 00:54:38 -- common/autotest_common.sh@930 -- # kill -0 3368103 00:14:54.238 00:54:38 -- common/autotest_common.sh@931 -- # uname 00:14:54.238 00:54:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:54.238 00:54:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3368103 00:14:54.238 00:54:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:54.238 00:54:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:54.238 00:54:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3368103' 00:14:54.238 killing process with pid 3368103 00:14:54.238 00:54:38 -- common/autotest_common.sh@945 -- # kill 3368103 00:14:54.238 00:54:38 -- common/autotest_common.sh@950 -- # wait 3368103 00:14:54.497 00:54:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:54.497 00:54:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:54.497 00:54:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:54.497 00:54:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.497 00:54:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:54.497 00:54:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.497 00:54:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.497 00:54:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.404 00:54:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:56.404 00:14:56.404 real 0m12.806s 00:14:56.404 user 0m29.206s 00:14:56.404 sys 0m2.884s 
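The repeated kill -0 / sleep 0.5 entries traced above are the test's wait loop around a backgrounded spdk_nvme_perf run: the script simply polls the PID until the perf process goes away, and the "No such process" from the final kill -0 is what ends the loop. A minimal, hedged sketch of that polling pattern in bash, assuming a perf_pid captured from a backgrounded run (illustrative only, not the literal delete_subsystem.sh source):

    perf_pid=$!          # assumed: PID of an spdk_nvme_perf run started with '&'
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # bail out after roughly 10 s of polling rather than hanging the test
        (( delay++ > 20 )) && { echo "spdk_nvme_perf did not exit in time" >&2; exit 1; }
        sleep 0.5
    done

Once the loop falls through, the script continues with teardown (nvmftestfini), as the trace above shows.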
00:14:56.404 00:54:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.404 00:54:40 -- common/autotest_common.sh@10 -- # set +x 00:14:56.404 ************************************ 00:14:56.404 END TEST nvmf_delete_subsystem 00:14:56.404 ************************************ 00:14:56.404 00:54:40 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:56.404 00:54:40 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:56.404 00:54:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:56.404 00:54:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.404 00:54:40 -- common/autotest_common.sh@10 -- # set +x 00:14:56.404 ************************************ 00:14:56.404 START TEST nvmf_nvme_cli 00:14:56.404 ************************************ 00:14:56.404 00:54:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:56.404 * Looking for test storage... 00:14:56.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.404 00:54:40 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.404 00:54:40 -- nvmf/common.sh@7 -- # uname -s 00:14:56.404 00:54:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.404 00:54:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.404 00:54:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.405 00:54:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.405 00:54:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.405 00:54:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.405 00:54:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.405 00:54:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.405 00:54:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.405 00:54:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.405 00:54:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.405 00:54:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.405 00:54:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.405 00:54:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.405 00:54:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.405 00:54:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.405 00:54:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.405 00:54:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.405 00:54:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.405 00:54:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.405 00:54:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.405 00:54:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.405 00:54:40 -- paths/export.sh@5 -- # export PATH 00:14:56.405 00:54:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.405 00:54:40 -- nvmf/common.sh@46 -- # : 0 00:14:56.405 00:54:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.405 00:54:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.405 00:54:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.405 00:54:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.405 00:54:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.405 00:54:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.405 00:54:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.405 00:54:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.405 00:54:40 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.405 00:54:40 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.405 00:54:40 -- target/nvme_cli.sh@14 -- # devs=() 00:14:56.405 00:54:40 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:56.405 00:54:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.405 00:54:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.405 00:54:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:56.405 00:54:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.405 00:54:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.405 00:54:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.405 00:54:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.405 00:54:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.405 00:54:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:56.405 00:54:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:56.405 00:54:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:56.405 00:54:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.943 00:54:42 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:58.943 00:54:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:58.943 00:54:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:58.943 00:54:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:58.943 00:54:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:58.943 00:54:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:58.943 00:54:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:58.943 00:54:42 -- nvmf/common.sh@294 -- # net_devs=() 00:14:58.943 00:54:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:58.943 00:54:42 -- nvmf/common.sh@295 -- # e810=() 00:14:58.943 00:54:42 -- nvmf/common.sh@295 -- # local -ga e810 00:14:58.943 00:54:42 -- nvmf/common.sh@296 -- # x722=() 00:14:58.943 00:54:42 -- nvmf/common.sh@296 -- # local -ga x722 00:14:58.943 00:54:42 -- nvmf/common.sh@297 -- # mlx=() 00:14:58.943 00:54:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:58.943 00:54:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.943 00:54:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:58.943 00:54:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:58.943 00:54:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:58.943 00:54:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:58.944 00:54:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:58.944 00:54:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.944 00:54:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:58.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:58.944 00:54:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.944 00:54:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:58.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:58.944 00:54:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
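The "Found 0000:0a:00.x (0x8086 - 0x159b)" lines above come from matching each candidate PCI function's vendor/device ID against the table of supported E810/X722/Mellanox IDs assembled just before. A rough sketch of that kind of sysfs lookup (illustrative only, not the common.sh helper itself; the PCI address is taken from the trace as an example):

    pci=0000:0a:00.0                                   # example address from the trace
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # e.g. 0x8086
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # e.g. 0x159b
    echo "Found $pci ($vendor - $device)"
    # the kernel net device behind the port sits under the same sysfs node
    ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null    # e.g. cvl_0_0

The entries that follow perform that second step, reporting cvl_0_0 and cvl_0_1 as the net devices found under the two ports.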
00:14:58.944 00:54:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:58.944 00:54:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.944 00:54:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.944 00:54:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.944 00:54:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.944 00:54:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:58.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:58.944 00:54:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.944 00:54:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.944 00:54:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.944 00:54:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.944 00:54:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.944 00:54:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:58.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:58.944 00:54:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.944 00:54:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:58.944 00:54:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:58.944 00:54:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:58.944 00:54:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.944 00:54:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.944 00:54:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.944 00:54:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:58.944 00:54:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.944 00:54:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.944 00:54:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:58.944 00:54:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.944 00:54:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.944 00:54:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:58.944 00:54:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:58.944 00:54:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.944 00:54:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.944 00:54:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.944 00:54:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.944 00:54:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:58.944 00:54:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.944 00:54:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.944 00:54:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.944 00:54:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:58.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:58.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:14:58.944 00:14:58.944 --- 10.0.0.2 ping statistics --- 00:14:58.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.944 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:14:58.944 00:54:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:14:58.944 00:14:58.944 --- 10.0.0.1 ping statistics --- 00:14:58.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.944 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:14:58.944 00:54:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.944 00:54:42 -- nvmf/common.sh@410 -- # return 0 00:14:58.944 00:54:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.944 00:54:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.944 00:54:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.944 00:54:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.944 00:54:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.944 00:54:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:58.944 00:54:42 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:58.944 00:54:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.944 00:54:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:58.944 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 00:54:42 -- nvmf/common.sh@469 -- # nvmfpid=3371036 00:14:58.944 00:54:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.944 00:54:42 -- nvmf/common.sh@470 -- # waitforlisten 3371036 00:14:58.944 00:54:42 -- common/autotest_common.sh@819 -- # '[' -z 3371036 ']' 00:14:58.944 00:54:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.944 00:54:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.944 00:54:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.944 00:54:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.944 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 [2024-07-23 00:54:42.714895] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:58.944 [2024-07-23 00:54:42.715000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.944 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.944 [2024-07-23 00:54:42.783532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.944 [2024-07-23 00:54:42.874135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.944 [2024-07-23 00:54:42.874317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.944 [2024-07-23 00:54:42.874337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:58.944 [2024-07-23 00:54:42.874352] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.944 [2024-07-23 00:54:42.874454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.944 [2024-07-23 00:54:42.874512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.944 [2024-07-23 00:54:42.874611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.944 [2024-07-23 00:54:42.874618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.512 00:54:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:59.512 00:54:43 -- common/autotest_common.sh@852 -- # return 0 00:14:59.512 00:54:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.512 00:54:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:59.512 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.512 00:54:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.512 00:54:43 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.512 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.512 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.771 [2024-07-23 00:54:43.718338] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.771 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.771 00:54:43 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:59.771 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.771 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.771 Malloc0 00:14:59.771 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.771 00:54:43 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:59.771 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.771 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.771 Malloc1 00:14:59.771 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.771 00:54:43 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:59.771 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.771 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.771 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.771 00:54:43 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:59.771 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.772 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.772 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.772 00:54:43 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.772 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.772 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.772 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.772 00:54:43 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.772 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.772 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.772 [2024-07-23 00:54:43.801469] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:14:59.772 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.772 00:54:43 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.772 00:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.772 00:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.772 00:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.772 00:54:43 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:59.772 00:14:59.772 Discovery Log Number of Records 2, Generation counter 2 00:14:59.772 =====Discovery Log Entry 0====== 00:14:59.772 trtype: tcp 00:14:59.772 adrfam: ipv4 00:14:59.772 subtype: current discovery subsystem 00:14:59.772 treq: not required 00:14:59.772 portid: 0 00:14:59.772 trsvcid: 4420 00:14:59.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:59.772 traddr: 10.0.0.2 00:14:59.772 eflags: explicit discovery connections, duplicate discovery information 00:14:59.772 sectype: none 00:14:59.772 =====Discovery Log Entry 1====== 00:14:59.772 trtype: tcp 00:14:59.772 adrfam: ipv4 00:14:59.772 subtype: nvme subsystem 00:14:59.772 treq: not required 00:14:59.772 portid: 0 00:14:59.772 trsvcid: 4420 00:14:59.772 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:59.772 traddr: 10.0.0.2 00:14:59.772 eflags: none 00:14:59.772 sectype: none 00:14:59.772 00:54:43 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:59.772 00:54:43 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:59.772 00:54:43 -- nvmf/common.sh@510 -- # local dev _ 00:14:59.772 00:54:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:59.772 00:54:43 -- nvmf/common.sh@509 -- # nvme list 00:14:59.772 00:54:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:59.772 00:54:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:59.772 00:54:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:59.772 00:54:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:59.772 00:54:43 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:59.772 00:54:43 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.711 00:54:44 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:00.711 00:54:44 -- common/autotest_common.sh@1177 -- # local i=0 00:15:00.711 00:54:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.711 00:54:44 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:00.711 00:54:44 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:00.711 00:54:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:02.617 00:54:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:02.617 00:54:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:02.617 00:54:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.617 00:54:46 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:02.617 00:54:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.617 00:54:46 -- common/autotest_common.sh@1187 -- # return 0 00:15:02.617 00:54:46 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:02.617 00:54:46 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@509 -- # nvme list 00:15:02.617 00:54:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:02.617 00:54:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:02.617 00:54:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:02.617 /dev/nvme0n1 ]] 00:15:02.617 00:54:46 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:02.617 00:54:46 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:02.617 00:54:46 -- nvmf/common.sh@510 -- # local dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@509 -- # nvme list 00:15:02.617 00:54:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:02.617 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.617 00:54:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:02.618 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.618 00:54:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:02.618 00:54:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:02.618 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.618 00:54:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:02.618 00:54:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:02.618 00:54:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:02.618 00:54:46 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:02.618 00:54:46 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.618 00:54:46 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.618 00:54:46 -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.618 00:54:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:02.618 00:54:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.618 00:54:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:02.618 00:54:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.618 00:54:46 -- common/autotest_common.sh@1210 -- # return 0 00:15:02.618 00:54:46 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:02.618 00:54:46 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.618 00:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.618 00:54:46 -- common/autotest_common.sh@10 -- # set +x 00:15:02.618 00:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.618 00:54:46 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:02.618 00:54:46 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:02.618 00:54:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.618 00:54:46 -- nvmf/common.sh@116 -- # sync 00:15:02.618 00:54:46 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.618 00:54:46 -- nvmf/common.sh@119 -- # set +e 00:15:02.618 00:54:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.618 00:54:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.618 rmmod nvme_tcp 00:15:02.618 rmmod nvme_fabrics 00:15:02.618 rmmod nvme_keyring 00:15:02.618 00:54:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.618 00:54:46 -- nvmf/common.sh@123 -- # set -e 00:15:02.618 00:54:46 -- nvmf/common.sh@124 -- # return 0 00:15:02.618 00:54:46 -- nvmf/common.sh@477 -- # '[' -n 3371036 ']' 00:15:02.618 00:54:46 -- nvmf/common.sh@478 -- # killprocess 3371036 00:15:02.618 00:54:46 -- common/autotest_common.sh@926 -- # '[' -z 3371036 ']' 00:15:02.618 00:54:46 -- common/autotest_common.sh@930 -- # kill -0 3371036 00:15:02.618 00:54:46 -- common/autotest_common.sh@931 -- # uname 00:15:02.618 00:54:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.618 00:54:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3371036 00:15:02.618 00:54:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.618 00:54:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.618 00:54:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3371036' 00:15:02.618 killing process with pid 3371036 00:15:02.618 00:54:46 -- common/autotest_common.sh@945 -- # kill 3371036 00:15:02.618 00:54:46 -- common/autotest_common.sh@950 -- # wait 3371036 00:15:02.878 00:54:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.878 00:54:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.878 00:54:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.878 00:54:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.878 00:54:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.878 00:54:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.878 00:54:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.878 00:54:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.421 00:54:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:05.421 00:15:05.421 real 0m8.586s 00:15:05.421 user 0m17.360s 00:15:05.421 sys 0m2.142s 00:15:05.421 00:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.421 00:54:49 -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 ************************************ 00:15:05.421 END TEST nvmf_nvme_cli 00:15:05.421 ************************************ 00:15:05.421 00:54:49 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:05.421 00:54:49 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:05.421 00:54:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:05.421 00:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.421 00:54:49 -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 ************************************ 00:15:05.421 START TEST nvmf_vfio_user 00:15:05.421 ************************************ 00:15:05.421 00:54:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:05.421 * Looking for test storage... 
00:15:05.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.421 00:54:49 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.421 00:54:49 -- nvmf/common.sh@7 -- # uname -s 00:15:05.421 00:54:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.422 00:54:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.422 00:54:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.422 00:54:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.422 00:54:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.422 00:54:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.422 00:54:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.422 00:54:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.422 00:54:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.422 00:54:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.422 00:54:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.422 00:54:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.422 00:54:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.422 00:54:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.422 00:54:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.422 00:54:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.422 00:54:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.422 00:54:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.422 00:54:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.422 00:54:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.422 00:54:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.422 00:54:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.422 00:54:49 -- paths/export.sh@5 -- # export PATH 00:15:05.422 00:54:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.422 00:54:49 -- nvmf/common.sh@46 -- # : 0 00:15:05.422 00:54:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.422 00:54:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.422 00:54:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.422 00:54:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.422 00:54:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.422 00:54:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:05.422 00:54:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.422 00:54:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3371985 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3371985' 00:15:05.422 Process pid: 3371985 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:05.422 00:54:49 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3371985 00:15:05.422 00:54:49 -- common/autotest_common.sh@819 -- # '[' -z 3371985 ']' 00:15:05.422 00:54:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.422 00:54:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.422 00:54:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.422 00:54:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.422 00:54:49 -- common/autotest_common.sh@10 -- # set +x 00:15:05.422 [2024-07-23 00:54:49.267193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:05.422 [2024-07-23 00:54:49.267267] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.422 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.422 [2024-07-23 00:54:49.324330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.422 [2024-07-23 00:54:49.408026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.422 [2024-07-23 00:54:49.408164] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.422 [2024-07-23 00:54:49.408181] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.422 [2024-07-23 00:54:49.408192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.422 [2024-07-23 00:54:49.408251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.422 [2024-07-23 00:54:49.408310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.422 [2024-07-23 00:54:49.408375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.422 [2024-07-23 00:54:49.408377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.359 00:54:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:06.359 00:54:50 -- common/autotest_common.sh@852 -- # return 0 00:15:06.359 00:54:50 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:07.296 00:54:51 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:07.555 00:54:51 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:07.555 00:54:51 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:07.555 00:54:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:07.555 00:54:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:07.555 00:54:51 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:07.813 Malloc1 00:15:07.813 00:54:51 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:08.070 00:54:52 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:08.328 00:54:52 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:08.586 00:54:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:08.586 00:54:52 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:08.586 00:54:52 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:08.846 Malloc2 00:15:08.846 00:54:52 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:09.105 00:54:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:09.105 00:54:53 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:09.362 00:54:53 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:09.362 00:54:53 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:09.362 00:54:53 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:09.362 00:54:53 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:09.362 00:54:53 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:09.362 00:54:53 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:09.362 [2024-07-23 00:54:53.552673] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:09.362 [2024-07-23 00:54:53.552717] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372513 ] 00:15:09.362 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.623 [2024-07-23 00:54:53.588075] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:09.623 [2024-07-23 00:54:53.591016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:09.623 [2024-07-23 00:54:53.591044] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7435207000 00:15:09.623 [2024-07-23 00:54:53.592012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.592992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.593982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.595008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.596010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.597006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.598010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:09.623 [2024-07-23 00:54:53.599018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:09.624 [2024-07-23 00:54:53.600029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:09.624 [2024-07-23 00:54:53.600049] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7433fbd000 00:15:09.624 [2024-07-23 00:54:53.601195] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:09.624 [2024-07-23 00:54:53.615374] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:09.624 [2024-07-23 00:54:53.615407] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:09.624 [2024-07-23 00:54:53.620149] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:09.624 [2024-07-23 00:54:53.620199] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:09.624 [2024-07-23 00:54:53.620285] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:09.624 [2024-07-23 00:54:53.620314] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:09.624 [2024-07-23 00:54:53.620324] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:09.624 [2024-07-23 00:54:53.622625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:09.624 [2024-07-23 00:54:53.622645] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:09.624 [2024-07-23 00:54:53.622657] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:09.624 [2024-07-23 00:54:53.623156] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:09.624 [2024-07-23 00:54:53.623172] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:09.624 [2024-07-23 00:54:53.623185] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:09.624 [2024-07-23 00:54:53.624163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:09.624 [2024-07-23 00:54:53.624182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:09.624 [2024-07-23 00:54:53.625167] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:09.624 [2024-07-23 00:54:53.625184] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:09.624 [2024-07-23 00:54:53.625193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:09.624 [2024-07-23 00:54:53.625203] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:09.624 [2024-07-23 00:54:53.625312] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:09.624 [2024-07-23 00:54:53.625324] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:09.624 [2024-07-23 00:54:53.625333] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:09.624 [2024-07-23 00:54:53.626175] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:09.624 [2024-07-23 00:54:53.627179] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:09.624 [2024-07-23 00:54:53.628188] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:09.624 [2024-07-23 00:54:53.629228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:09.624 [2024-07-23 00:54:53.630203] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:09.624 [2024-07-23 00:54:53.630220] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:09.624 [2024-07-23 00:54:53.630228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630251] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:09.624 [2024-07-23 00:54:53.630264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630283] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:09.624 [2024-07-23 00:54:53.630292] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:09.624 [2024-07-23 00:54:53.630309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:09.624 [2024-07-23 00:54:53.630366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:09.624 [2024-07-23 00:54:53.630381] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:09.624 [2024-07-23 00:54:53.630389] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:09.624 [2024-07-23 00:54:53.630396] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:09.624 [2024-07-23 00:54:53.630403] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:09.624 [2024-07-23 00:54:53.630411] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:09.624 [2024-07-23 00:54:53.630418] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:09.624 [2024-07-23 00:54:53.630425] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:09.624 [2024-07-23 00:54:53.630473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:09.624 [2024-07-23 00:54:53.630494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.624 [2024-07-23 00:54:53.630507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.624 [2024-07-23 00:54:53.630518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.624 [2024-07-23 00:54:53.630530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.624 [2024-07-23 00:54:53.630537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630551] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:09.624 [2024-07-23 00:54:53.630578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:09.624 [2024-07-23 00:54:53.630588] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:09.624 [2024-07-23 00:54:53.630611] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630649] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:09.624 [2024-07-23 00:54:53.630678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:09.624 [2024-07-23 00:54:53.630740] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630753] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:09.624 [2024-07-23 00:54:53.630765] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:09.624 [2024-07-23 00:54:53.630773] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:09.624 [2024-07-23 00:54:53.630783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:09.624 [2024-07-23 00:54:53.630796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:09.624 [2024-07-23 00:54:53.630815] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:09.624 [2024-07-23 00:54:53.630832] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.630846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.630857] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:09.625 [2024-07-23 00:54:53.630865] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:09.625 [2024-07-23 00:54:53.630874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.630894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.630929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.630943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.630955] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:09.625 [2024-07-23 00:54:53.630962] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:09.625 [2024-07-23 00:54:53.630973] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.630990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631003] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.631013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.631026] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.631035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.631043] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.631050] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:09.625 [2024-07-23 00:54:53.631058] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:09.625 [2024-07-23 00:54:53.631065] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:09.625 [2024-07-23 00:54:53.631089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631207] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:09.625 [2024-07-23 00:54:53.631216] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:09.625 [2024-07-23 00:54:53.631224] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:09.625 [2024-07-23 
00:54:53.631230] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:09.625 [2024-07-23 00:54:53.631239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:09.625 [2024-07-23 00:54:53.631250] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:09.625 [2024-07-23 00:54:53.631257] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:09.625 [2024-07-23 00:54:53.631265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631275] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:09.625 [2024-07-23 00:54:53.631282] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:09.625 [2024-07-23 00:54:53.631291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631302] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:09.625 [2024-07-23 00:54:53.631309] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:09.625 [2024-07-23 00:54:53.631317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:09.625 [2024-07-23 00:54:53.631328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:09.625 [2024-07-23 00:54:53.631372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:09.625 ===================================================== 00:15:09.625 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.625 ===================================================== 00:15:09.625 Controller Capabilities/Features 00:15:09.625 ================================ 00:15:09.625 Vendor ID: 4e58 00:15:09.625 Subsystem Vendor ID: 4e58 00:15:09.625 Serial Number: SPDK1 00:15:09.625 Model Number: SPDK bdev Controller 00:15:09.625 Firmware Version: 24.01.1 00:15:09.625 Recommended Arb Burst: 6 00:15:09.625 IEEE OUI Identifier: 8d 6b 50 00:15:09.625 Multi-path I/O 00:15:09.625 May have multiple subsystem ports: Yes 00:15:09.625 May have multiple controllers: Yes 00:15:09.625 Associated with SR-IOV VF: No 00:15:09.625 Max Data Transfer Size: 131072 00:15:09.625 Max Number of Namespaces: 32 00:15:09.625 Max Number of I/O Queues: 127 00:15:09.625 NVMe Specification Version (VS): 1.3 00:15:09.625 NVMe Specification Version (Identify): 1.3 00:15:09.625 Maximum Queue Entries: 256 00:15:09.625 Contiguous Queues Required: Yes 00:15:09.625 Arbitration Mechanisms Supported 00:15:09.625 
Weighted Round Robin: Not Supported 00:15:09.625 Vendor Specific: Not Supported 00:15:09.625 Reset Timeout: 15000 ms 00:15:09.625 Doorbell Stride: 4 bytes 00:15:09.625 NVM Subsystem Reset: Not Supported 00:15:09.625 Command Sets Supported 00:15:09.625 NVM Command Set: Supported 00:15:09.625 Boot Partition: Not Supported 00:15:09.625 Memory Page Size Minimum: 4096 bytes 00:15:09.625 Memory Page Size Maximum: 4096 bytes 00:15:09.625 Persistent Memory Region: Not Supported 00:15:09.625 Optional Asynchronous Events Supported 00:15:09.625 Namespace Attribute Notices: Supported 00:15:09.625 Firmware Activation Notices: Not Supported 00:15:09.625 ANA Change Notices: Not Supported 00:15:09.625 PLE Aggregate Log Change Notices: Not Supported 00:15:09.625 LBA Status Info Alert Notices: Not Supported 00:15:09.625 EGE Aggregate Log Change Notices: Not Supported 00:15:09.625 Normal NVM Subsystem Shutdown event: Not Supported 00:15:09.625 Zone Descriptor Change Notices: Not Supported 00:15:09.625 Discovery Log Change Notices: Not Supported 00:15:09.625 Controller Attributes 00:15:09.625 128-bit Host Identifier: Supported 00:15:09.625 Non-Operational Permissive Mode: Not Supported 00:15:09.625 NVM Sets: Not Supported 00:15:09.625 Read Recovery Levels: Not Supported 00:15:09.625 Endurance Groups: Not Supported 00:15:09.625 Predictable Latency Mode: Not Supported 00:15:09.625 Traffic Based Keep ALive: Not Supported 00:15:09.625 Namespace Granularity: Not Supported 00:15:09.625 SQ Associations: Not Supported 00:15:09.625 UUID List: Not Supported 00:15:09.625 Multi-Domain Subsystem: Not Supported 00:15:09.625 Fixed Capacity Management: Not Supported 00:15:09.625 Variable Capacity Management: Not Supported 00:15:09.625 Delete Endurance Group: Not Supported 00:15:09.625 Delete NVM Set: Not Supported 00:15:09.625 Extended LBA Formats Supported: Not Supported 00:15:09.626 Flexible Data Placement Supported: Not Supported 00:15:09.626 00:15:09.626 Controller Memory Buffer Support 00:15:09.626 ================================ 00:15:09.626 Supported: No 00:15:09.626 00:15:09.626 Persistent Memory Region Support 00:15:09.626 ================================ 00:15:09.626 Supported: No 00:15:09.626 00:15:09.626 Admin Command Set Attributes 00:15:09.626 ============================ 00:15:09.626 Security Send/Receive: Not Supported 00:15:09.626 Format NVM: Not Supported 00:15:09.626 Firmware Activate/Download: Not Supported 00:15:09.626 Namespace Management: Not Supported 00:15:09.626 Device Self-Test: Not Supported 00:15:09.626 Directives: Not Supported 00:15:09.626 NVMe-MI: Not Supported 00:15:09.626 Virtualization Management: Not Supported 00:15:09.626 Doorbell Buffer Config: Not Supported 00:15:09.626 Get LBA Status Capability: Not Supported 00:15:09.626 Command & Feature Lockdown Capability: Not Supported 00:15:09.626 Abort Command Limit: 4 00:15:09.626 Async Event Request Limit: 4 00:15:09.626 Number of Firmware Slots: N/A 00:15:09.626 Firmware Slot 1 Read-Only: N/A 00:15:09.626 Firmware Activation Without Reset: N/A 00:15:09.626 Multiple Update Detection Support: N/A 00:15:09.626 Firmware Update Granularity: No Information Provided 00:15:09.626 Per-Namespace SMART Log: No 00:15:09.626 Asymmetric Namespace Access Log Page: Not Supported 00:15:09.626 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:09.626 Command Effects Log Page: Supported 00:15:09.626 Get Log Page Extended Data: Supported 00:15:09.626 Telemetry Log Pages: Not Supported 00:15:09.626 Persistent Event Log Pages: Not Supported 00:15:09.626 Supported 
Log Pages Log Page: May Support 00:15:09.626 Commands Supported & Effects Log Page: Not Supported 00:15:09.626 Feature Identifiers & Effects Log Page:May Support 00:15:09.626 NVMe-MI Commands & Effects Log Page: May Support 00:15:09.626 Data Area 4 for Telemetry Log: Not Supported 00:15:09.626 Error Log Page Entries Supported: 128 00:15:09.626 Keep Alive: Supported 00:15:09.626 Keep Alive Granularity: 10000 ms 00:15:09.626 00:15:09.626 NVM Command Set Attributes 00:15:09.626 ========================== 00:15:09.626 Submission Queue Entry Size 00:15:09.626 Max: 64 00:15:09.626 Min: 64 00:15:09.626 Completion Queue Entry Size 00:15:09.626 Max: 16 00:15:09.626 Min: 16 00:15:09.626 Number of Namespaces: 32 00:15:09.626 Compare Command: Supported 00:15:09.626 Write Uncorrectable Command: Not Supported 00:15:09.626 Dataset Management Command: Supported 00:15:09.626 Write Zeroes Command: Supported 00:15:09.626 Set Features Save Field: Not Supported 00:15:09.626 Reservations: Not Supported 00:15:09.626 Timestamp: Not Supported 00:15:09.626 Copy: Supported 00:15:09.626 Volatile Write Cache: Present 00:15:09.626 Atomic Write Unit (Normal): 1 00:15:09.626 Atomic Write Unit (PFail): 1 00:15:09.626 Atomic Compare & Write Unit: 1 00:15:09.626 Fused Compare & Write: Supported 00:15:09.626 Scatter-Gather List 00:15:09.626 SGL Command Set: Supported (Dword aligned) 00:15:09.626 SGL Keyed: Not Supported 00:15:09.626 SGL Bit Bucket Descriptor: Not Supported 00:15:09.626 SGL Metadata Pointer: Not Supported 00:15:09.626 Oversized SGL: Not Supported 00:15:09.626 SGL Metadata Address: Not Supported 00:15:09.626 SGL Offset: Not Supported 00:15:09.626 Transport SGL Data Block: Not Supported 00:15:09.626 Replay Protected Memory Block: Not Supported 00:15:09.626 00:15:09.626 Firmware Slot Information 00:15:09.626 ========================= 00:15:09.626 Active slot: 1 00:15:09.626 Slot 1 Firmware Revision: 24.01.1 00:15:09.626 00:15:09.626 00:15:09.626 Commands Supported and Effects 00:15:09.626 ============================== 00:15:09.626 Admin Commands 00:15:09.626 -------------- 00:15:09.626 Get Log Page (02h): Supported 00:15:09.626 Identify (06h): Supported 00:15:09.626 Abort (08h): Supported 00:15:09.626 Set Features (09h): Supported 00:15:09.626 Get Features (0Ah): Supported 00:15:09.626 Asynchronous Event Request (0Ch): Supported 00:15:09.626 Keep Alive (18h): Supported 00:15:09.626 I/O Commands 00:15:09.626 ------------ 00:15:09.626 Flush (00h): Supported LBA-Change 00:15:09.626 Write (01h): Supported LBA-Change 00:15:09.626 Read (02h): Supported 00:15:09.626 Compare (05h): Supported 00:15:09.626 Write Zeroes (08h): Supported LBA-Change 00:15:09.626 Dataset Management (09h): Supported LBA-Change 00:15:09.626 Copy (19h): Supported LBA-Change 00:15:09.626 Unknown (79h): Supported LBA-Change 00:15:09.626 Unknown (7Ah): Supported 00:15:09.626 00:15:09.626 Error Log 00:15:09.626 ========= 00:15:09.626 00:15:09.626 Arbitration 00:15:09.626 =========== 00:15:09.626 Arbitration Burst: 1 00:15:09.626 00:15:09.626 Power Management 00:15:09.626 ================ 00:15:09.626 Number of Power States: 1 00:15:09.626 Current Power State: Power State #0 00:15:09.626 Power State #0: 00:15:09.626 Max Power: 0.00 W 00:15:09.626 Non-Operational State: Operational 00:15:09.626 Entry Latency: Not Reported 00:15:09.626 Exit Latency: Not Reported 00:15:09.626 Relative Read Throughput: 0 00:15:09.626 Relative Read Latency: 0 00:15:09.626 Relative Write Throughput: 0 00:15:09.626 Relative Write Latency: 0 00:15:09.626 Idle Power: Not 
Reported 00:15:09.626 Active Power: Not Reported 00:15:09.626 Non-Operational Permissive Mode: Not Supported 00:15:09.626 00:15:09.626 Health Information 00:15:09.626 ================== 00:15:09.626 Critical Warnings: 00:15:09.626 Available Spare Space: OK 00:15:09.626 Temperature: OK 00:15:09.626 Device Reliability: OK 00:15:09.626 Read Only: No 00:15:09.626 Volatile Memory Backup: OK 00:15:09.626 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-23 00:54:53.631497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:09.626 [2024-07-23 00:54:53.631513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:09.626 [2024-07-23 00:54:53.631550] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:09.626 [2024-07-23 00:54:53.631566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.626 [2024-07-23 00:54:53.631585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.626 [2024-07-23 00:54:53.631610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.626 [2024-07-23 00:54:53.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.626 [2024-07-23 00:54:53.634623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:09.626 [2024-07-23 00:54:53.634644] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:09.626 [2024-07-23 00:54:53.635272] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:09.626 [2024-07-23 00:54:53.635284] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:09.626 [2024-07-23 00:54:53.636239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:09.627 [2024-07-23 00:54:53.636266] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:09.627 [2024-07-23 00:54:53.636317] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:09.627 [2024-07-23 00:54:53.638278] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:09.627 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:09.627 Available Spare: 0% 00:15:09.627 Available Spare Threshold: 0% 00:15:09.627 Life Percentage Used: 0% 00:15:09.627 Data Units Read: 0 00:15:09.627 Data Units Written: 0 00:15:09.627 Host Read Commands: 0 00:15:09.627 Host Write Commands: 0 00:15:09.627 Controller Busy Time: 0 minutes 00:15:09.627 Power Cycles: 0 00:15:09.627 Power On Hours: 0 hours 00:15:09.627 Unsafe Shutdowns: 0 00:15:09.627 Unrecoverable Media Errors: 0 00:15:09.627 Lifetime Error Log Entries: 0 00:15:09.627 Warning Temperature
Time: 0 minutes 00:15:09.627 Critical Temperature Time: 0 minutes 00:15:09.627 00:15:09.627 Number of Queues 00:15:09.627 ================ 00:15:09.627 Number of I/O Submission Queues: 127 00:15:09.627 Number of I/O Completion Queues: 127 00:15:09.627 00:15:09.627 Active Namespaces 00:15:09.627 ================= 00:15:09.627 Namespace ID:1 00:15:09.627 Error Recovery Timeout: Unlimited 00:15:09.627 Command Set Identifier: NVM (00h) 00:15:09.627 Deallocate: Supported 00:15:09.627 Deallocated/Unwritten Error: Not Supported 00:15:09.627 Deallocated Read Value: Unknown 00:15:09.627 Deallocate in Write Zeroes: Not Supported 00:15:09.627 Deallocated Guard Field: 0xFFFF 00:15:09.627 Flush: Supported 00:15:09.627 Reservation: Supported 00:15:09.627 Namespace Sharing Capabilities: Multiple Controllers 00:15:09.627 Size (in LBAs): 131072 (0GiB) 00:15:09.627 Capacity (in LBAs): 131072 (0GiB) 00:15:09.627 Utilization (in LBAs): 131072 (0GiB) 00:15:09.627 NGUID: 7842F0BD6C4C40C387F1FBDE37C894A7 00:15:09.627 UUID: 7842f0bd-6c4c-40c3-87f1-fbde37c894a7 00:15:09.627 Thin Provisioning: Not Supported 00:15:09.627 Per-NS Atomic Units: Yes 00:15:09.627 Atomic Boundary Size (Normal): 0 00:15:09.627 Atomic Boundary Size (PFail): 0 00:15:09.627 Atomic Boundary Offset: 0 00:15:09.627 Maximum Single Source Range Length: 65535 00:15:09.627 Maximum Copy Length: 65535 00:15:09.627 Maximum Source Range Count: 1 00:15:09.627 NGUID/EUI64 Never Reused: No 00:15:09.627 Namespace Write Protected: No 00:15:09.627 Number of LBA Formats: 1 00:15:09.627 Current LBA Format: LBA Format #00 00:15:09.627 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:09.627 00:15:09.627 00:54:53 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:09.627 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.959 Initializing NVMe Controllers 00:15:14.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:14.959 Initialization complete. Launching workers. 00:15:14.959 ======================================================== 00:15:14.959 Latency(us) 00:15:14.959 Device Information : IOPS MiB/s Average min max 00:15:14.959 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37120.51 145.00 3448.27 1167.62 7387.70 00:15:14.959 ======================================================== 00:15:14.959 Total : 37120.51 145.00 3448.27 1167.62 7387.70 00:15:14.959 00:15:14.959 00:54:58 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:14.959 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.236 Initializing NVMe Controllers 00:15:20.236 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.236 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:20.236 Initialization complete. Launching workers. 
00:15:20.236 ======================================================== 00:15:20.236 Latency(us) 00:15:20.236 Device Information : IOPS MiB/s Average min max 00:15:20.236 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15971.80 62.39 8023.88 5971.53 15975.56 00:15:20.236 ======================================================== 00:15:20.236 Total : 15971.80 62.39 8023.88 5971.53 15975.56 00:15:20.236 00:15:20.236 00:55:04 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:20.236 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.514 Initializing NVMe Controllers 00:15:25.514 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.514 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.514 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:25.514 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:25.514 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:25.514 Initialization complete. Launching workers. 00:15:25.514 Starting thread on core 2 00:15:25.514 Starting thread on core 3 00:15:25.514 Starting thread on core 1 00:15:25.514 00:55:09 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:25.514 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.809 Initializing NVMe Controllers 00:15:28.810 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.810 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.810 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:28.810 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:28.810 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:28.810 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:28.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:28.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:28.810 Initialization complete. Launching workers. 
00:15:28.810 Starting thread on core 1 with urgent priority queue 00:15:28.810 Starting thread on core 2 with urgent priority queue 00:15:28.810 Starting thread on core 3 with urgent priority queue 00:15:28.810 Starting thread on core 0 with urgent priority queue 00:15:28.810 SPDK bdev Controller (SPDK1 ) core 0: 5916.00 IO/s 16.90 secs/100000 ios 00:15:28.810 SPDK bdev Controller (SPDK1 ) core 1: 6140.33 IO/s 16.29 secs/100000 ios 00:15:28.810 SPDK bdev Controller (SPDK1 ) core 2: 6066.67 IO/s 16.48 secs/100000 ios 00:15:28.810 SPDK bdev Controller (SPDK1 ) core 3: 5999.67 IO/s 16.67 secs/100000 ios 00:15:28.810 ======================================================== 00:15:28.810 00:15:28.810 00:55:12 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:28.810 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.069 Initializing NVMe Controllers 00:15:29.069 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.069 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.069 Namespace ID: 1 size: 0GB 00:15:29.069 Initialization complete. 00:15:29.069 INFO: using host memory buffer for IO 00:15:29.069 Hello world! 00:15:29.069 00:55:13 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:29.069 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.446 Initializing NVMe Controllers 00:15:30.446 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:30.446 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:30.446 Initialization complete. Launching workers. 
00:15:30.446 submit (in ns) avg, min, max = 7909.0, 3463.3, 4015006.7 00:15:30.446 complete (in ns) avg, min, max = 24606.8, 2045.6, 4045351.1 00:15:30.446 00:15:30.446 Submit histogram 00:15:30.446 ================ 00:15:30.446 Range in us Cumulative Count 00:15:30.446 3.461 - 3.484: 0.3676% ( 51) 00:15:30.446 3.484 - 3.508: 1.2397% ( 121) 00:15:30.446 3.508 - 3.532: 3.6687% ( 337) 00:15:30.446 3.532 - 3.556: 8.8871% ( 724) 00:15:30.446 3.556 - 3.579: 17.1111% ( 1141) 00:15:30.446 3.579 - 3.603: 25.6523% ( 1185) 00:15:30.446 3.603 - 3.627: 34.0998% ( 1172) 00:15:30.446 3.627 - 3.650: 43.0373% ( 1240) 00:15:30.446 3.650 - 3.674: 50.7712% ( 1073) 00:15:30.446 3.674 - 3.698: 57.2510% ( 899) 00:15:30.446 3.698 - 3.721: 62.3324% ( 705) 00:15:30.446 3.721 - 3.745: 66.5201% ( 581) 00:15:30.446 3.745 - 3.769: 69.7852% ( 453) 00:15:30.446 3.769 - 3.793: 73.3098% ( 489) 00:15:30.446 3.793 - 3.816: 76.3803% ( 426) 00:15:30.446 3.816 - 3.840: 80.0923% ( 515) 00:15:30.446 3.840 - 3.864: 83.3790% ( 456) 00:15:30.446 3.864 - 3.887: 86.0242% ( 367) 00:15:30.446 3.887 - 3.911: 88.2586% ( 310) 00:15:30.446 3.911 - 3.935: 90.2624% ( 278) 00:15:30.446 3.935 - 3.959: 91.7183% ( 202) 00:15:30.446 3.959 - 3.982: 93.0157% ( 180) 00:15:30.446 3.982 - 4.006: 94.0608% ( 145) 00:15:30.446 4.006 - 4.030: 94.8681% ( 112) 00:15:30.446 4.030 - 4.053: 95.5096% ( 89) 00:15:30.446 4.053 - 4.077: 95.9925% ( 67) 00:15:30.446 4.077 - 4.101: 96.3529% ( 50) 00:15:30.446 4.101 - 4.124: 96.5980% ( 34) 00:15:30.446 4.124 - 4.148: 96.8286% ( 32) 00:15:30.446 4.148 - 4.172: 96.9007% ( 10) 00:15:30.446 4.172 - 4.196: 96.9583% ( 8) 00:15:30.446 4.196 - 4.219: 97.0881% ( 18) 00:15:30.446 4.219 - 4.243: 97.1457% ( 8) 00:15:30.446 4.243 - 4.267: 97.2611% ( 16) 00:15:30.446 4.267 - 4.290: 97.3548% ( 13) 00:15:30.446 4.290 - 4.314: 97.3980% ( 6) 00:15:30.446 4.314 - 4.338: 97.4845% ( 12) 00:15:30.446 4.338 - 4.361: 97.5205% ( 5) 00:15:30.446 4.361 - 4.385: 97.5494% ( 4) 00:15:30.446 4.385 - 4.409: 97.5710% ( 3) 00:15:30.446 4.433 - 4.456: 97.5782% ( 1) 00:15:30.446 4.456 - 4.480: 97.5926% ( 2) 00:15:30.446 4.480 - 4.504: 97.5998% ( 1) 00:15:30.446 4.504 - 4.527: 97.6070% ( 1) 00:15:30.446 4.551 - 4.575: 97.6359% ( 4) 00:15:30.446 4.599 - 4.622: 97.6431% ( 1) 00:15:30.446 4.622 - 4.646: 97.6719% ( 4) 00:15:30.446 4.646 - 4.670: 97.6863% ( 2) 00:15:30.446 4.670 - 4.693: 97.7440% ( 8) 00:15:30.446 4.693 - 4.717: 97.7944% ( 7) 00:15:30.446 4.717 - 4.741: 97.8305% ( 5) 00:15:30.446 4.741 - 4.764: 97.9170% ( 12) 00:15:30.446 4.764 - 4.788: 97.9530% ( 5) 00:15:30.446 4.788 - 4.812: 98.0323% ( 11) 00:15:30.446 4.812 - 4.836: 98.0827% ( 7) 00:15:30.446 4.836 - 4.859: 98.1188% ( 5) 00:15:30.446 4.859 - 4.883: 98.1476% ( 4) 00:15:30.446 4.883 - 4.907: 98.1620% ( 2) 00:15:30.446 4.907 - 4.930: 98.2485% ( 12) 00:15:30.446 4.930 - 4.954: 98.2629% ( 2) 00:15:30.446 4.954 - 4.978: 98.2918% ( 4) 00:15:30.446 4.978 - 5.001: 98.3062% ( 2) 00:15:30.446 5.001 - 5.025: 98.3278% ( 3) 00:15:30.446 5.025 - 5.049: 98.3494% ( 3) 00:15:30.446 5.049 - 5.073: 98.3566% ( 1) 00:15:30.446 5.096 - 5.120: 98.3855% ( 4) 00:15:30.446 5.167 - 5.191: 98.4071% ( 3) 00:15:30.446 5.310 - 5.333: 98.4143% ( 1) 00:15:30.446 5.476 - 5.499: 98.4215% ( 1) 00:15:30.446 5.547 - 5.570: 98.4287% ( 1) 00:15:30.446 5.784 - 5.807: 98.4359% ( 1) 00:15:30.446 6.068 - 6.116: 98.4431% ( 1) 00:15:30.446 6.258 - 6.305: 98.4503% ( 1) 00:15:30.446 6.447 - 6.495: 98.4575% ( 1) 00:15:30.446 6.684 - 6.732: 98.4648% ( 1) 00:15:30.446 6.779 - 6.827: 98.4720% ( 1) 00:15:30.446 6.921 - 6.969: 98.4792% ( 1) 
00:15:30.446 6.969 - 7.016: 98.4864% ( 1) 00:15:30.446 7.064 - 7.111: 98.4936% ( 1) 00:15:30.446 7.111 - 7.159: 98.5008% ( 1) 00:15:30.446 7.253 - 7.301: 98.5080% ( 1) 00:15:30.446 7.301 - 7.348: 98.5152% ( 1) 00:15:30.446 7.348 - 7.396: 98.5224% ( 1) 00:15:30.446 7.443 - 7.490: 98.5296% ( 1) 00:15:30.446 7.490 - 7.538: 98.5368% ( 1) 00:15:30.446 7.538 - 7.585: 98.5657% ( 4) 00:15:30.446 7.585 - 7.633: 98.5801% ( 2) 00:15:30.446 7.633 - 7.680: 98.6017% ( 3) 00:15:30.446 7.680 - 7.727: 98.6089% ( 1) 00:15:30.446 7.727 - 7.775: 98.6233% ( 2) 00:15:30.446 7.775 - 7.822: 98.6305% ( 1) 00:15:30.446 7.822 - 7.870: 98.6594% ( 4) 00:15:30.446 7.870 - 7.917: 98.6666% ( 1) 00:15:30.446 7.964 - 8.012: 98.6882% ( 3) 00:15:30.446 8.012 - 8.059: 98.6954% ( 1) 00:15:30.446 8.059 - 8.107: 98.7098% ( 2) 00:15:30.446 8.154 - 8.201: 98.7170% ( 1) 00:15:30.446 8.296 - 8.344: 98.7242% ( 1) 00:15:30.446 8.344 - 8.391: 98.7314% ( 1) 00:15:30.446 8.439 - 8.486: 98.7459% ( 2) 00:15:30.446 8.486 - 8.533: 98.7531% ( 1) 00:15:30.447 8.533 - 8.581: 98.7603% ( 1) 00:15:30.447 8.628 - 8.676: 98.7675% ( 1) 00:15:30.447 8.676 - 8.723: 98.7747% ( 1) 00:15:30.447 8.818 - 8.865: 98.7819% ( 1) 00:15:30.447 8.960 - 9.007: 98.7963% ( 2) 00:15:30.447 9.102 - 9.150: 98.8035% ( 1) 00:15:30.447 9.339 - 9.387: 98.8179% ( 2) 00:15:30.447 9.387 - 9.434: 98.8323% ( 2) 00:15:30.447 9.434 - 9.481: 98.8396% ( 1) 00:15:30.447 9.481 - 9.529: 98.8468% ( 1) 00:15:30.447 9.861 - 9.908: 98.8540% ( 1) 00:15:30.447 9.956 - 10.003: 98.8612% ( 1) 00:15:30.447 10.050 - 10.098: 98.8684% ( 1) 00:15:30.447 10.098 - 10.145: 98.8756% ( 1) 00:15:30.447 10.145 - 10.193: 98.8828% ( 1) 00:15:30.447 10.193 - 10.240: 98.8900% ( 1) 00:15:30.447 10.240 - 10.287: 98.8972% ( 1) 00:15:30.447 10.572 - 10.619: 98.9044% ( 1) 00:15:30.447 10.667 - 10.714: 98.9116% ( 1) 00:15:30.447 10.856 - 10.904: 98.9188% ( 1) 00:15:30.447 10.904 - 10.951: 98.9333% ( 2) 00:15:30.447 11.473 - 11.520: 98.9477% ( 2) 00:15:30.447 11.520 - 11.567: 98.9549% ( 1) 00:15:30.447 11.757 - 11.804: 98.9621% ( 1) 00:15:30.447 11.804 - 11.852: 98.9693% ( 1) 00:15:30.447 12.136 - 12.231: 98.9765% ( 1) 00:15:30.447 12.231 - 12.326: 98.9837% ( 1) 00:15:30.447 12.326 - 12.421: 98.9909% ( 1) 00:15:30.447 12.610 - 12.705: 98.9981% ( 1) 00:15:30.447 12.800 - 12.895: 99.0053% ( 1) 00:15:30.447 13.369 - 13.464: 99.0125% ( 1) 00:15:30.447 13.748 - 13.843: 99.0342% ( 3) 00:15:30.447 14.222 - 14.317: 99.0414% ( 1) 00:15:30.447 14.317 - 14.412: 99.0486% ( 1) 00:15:30.447 14.696 - 14.791: 99.0558% ( 1) 00:15:30.447 14.791 - 14.886: 99.0630% ( 1) 00:15:30.447 14.886 - 14.981: 99.0702% ( 1) 00:15:30.447 15.265 - 15.360: 99.0918% ( 3) 00:15:30.447 16.877 - 16.972: 99.0990% ( 1) 00:15:30.447 17.067 - 17.161: 99.1207% ( 3) 00:15:30.447 17.256 - 17.351: 99.1351% ( 2) 00:15:30.447 17.351 - 17.446: 99.1567% ( 3) 00:15:30.447 17.446 - 17.541: 99.1855% ( 4) 00:15:30.447 17.541 - 17.636: 99.2504% ( 9) 00:15:30.447 17.636 - 17.730: 99.2864% ( 5) 00:15:30.447 17.730 - 17.825: 99.3369% ( 7) 00:15:30.447 17.825 - 17.920: 99.3513% ( 2) 00:15:30.447 17.920 - 18.015: 99.4090% ( 8) 00:15:30.447 18.015 - 18.110: 99.4450% ( 5) 00:15:30.447 18.110 - 18.204: 99.4810% ( 5) 00:15:30.447 18.204 - 18.299: 99.5531% ( 10) 00:15:30.447 18.299 - 18.394: 99.5820% ( 4) 00:15:30.447 18.394 - 18.489: 99.6468% ( 9) 00:15:30.447 18.489 - 18.584: 99.6901% ( 6) 00:15:30.447 18.584 - 18.679: 99.7189% ( 4) 00:15:30.447 18.679 - 18.773: 99.7477% ( 4) 00:15:30.447 18.773 - 18.868: 99.7549% ( 1) 00:15:30.447 18.868 - 18.963: 99.7838% ( 4) 00:15:30.447 
19.058 - 19.153: 99.7910% ( 1) 00:15:30.447 19.153 - 19.247: 99.7982% ( 1) 00:15:30.447 19.342 - 19.437: 99.8198% ( 3) 00:15:30.447 19.437 - 19.532: 99.8270% ( 1) 00:15:30.447 19.532 - 19.627: 99.8342% ( 1) 00:15:30.447 19.627 - 19.721: 99.8414% ( 1) 00:15:30.447 19.911 - 20.006: 99.8486% ( 1) 00:15:30.447 20.196 - 20.290: 99.8558% ( 1) 00:15:30.447 23.230 - 23.324: 99.8631% ( 1) 00:15:30.447 25.979 - 26.169: 99.8703% ( 1) 00:15:30.447 26.359 - 26.548: 99.8775% ( 1) 00:15:30.447 26.548 - 26.738: 99.8847% ( 1) 00:15:30.447 28.444 - 28.634: 99.8919% ( 1) 00:15:30.447 30.151 - 30.341: 99.8991% ( 1) 00:15:30.447 3980.705 - 4004.978: 99.9928% ( 13) 00:15:30.447 4004.978 - 4029.250: 100.0000% ( 1) 00:15:30.447 00:15:30.447 Complete histogram 00:15:30.447 ================== 00:15:30.447 Range in us Cumulative Count 00:15:30.447 2.039 - 2.050: 0.3099% ( 43) 00:15:30.447 2.050 - 2.062: 17.1832% ( 2341) 00:15:30.447 2.062 - 2.074: 31.2887% ( 1957) 00:15:30.447 2.074 - 2.086: 35.3107% ( 558) 00:15:30.447 2.086 - 2.098: 53.9426% ( 2585) 00:15:30.447 2.098 - 2.110: 61.4963% ( 1048) 00:15:30.447 2.110 - 2.121: 64.0767% ( 358) 00:15:30.447 2.121 - 2.133: 72.0701% ( 1109) 00:15:30.447 2.133 - 2.145: 74.6865% ( 363) 00:15:30.447 2.145 - 2.157: 78.2327% ( 492) 00:15:30.447 2.157 - 2.169: 87.3072% ( 1259) 00:15:30.447 2.169 - 2.181: 89.6281% ( 322) 00:15:30.447 2.181 - 2.193: 90.9183% ( 179) 00:15:30.447 2.193 - 2.204: 92.4968% ( 219) 00:15:30.447 2.204 - 2.216: 93.0518% ( 77) 00:15:30.447 2.216 - 2.228: 94.3780% ( 184) 00:15:30.447 2.228 - 2.240: 95.5096% ( 157) 00:15:30.447 2.240 - 2.252: 95.7330% ( 31) 00:15:30.447 2.252 - 2.264: 95.9565% ( 31) 00:15:30.447 2.264 - 2.276: 96.0646% ( 15) 00:15:30.447 2.276 - 2.287: 96.1583% ( 13) 00:15:30.447 2.287 - 2.299: 96.2664% ( 15) 00:15:30.447 2.299 - 2.311: 96.3745% ( 15) 00:15:30.447 2.311 - 2.323: 96.4466% ( 10) 00:15:30.447 2.323 - 2.335: 96.6700% ( 31) 00:15:30.447 2.335 - 2.347: 96.9007% ( 32) 00:15:30.447 2.347 - 2.359: 97.2466% ( 48) 00:15:30.447 2.359 - 2.370: 97.5422% ( 41) 00:15:30.447 2.370 - 2.382: 97.8233% ( 39) 00:15:30.447 2.382 - 2.394: 98.0611% ( 33) 00:15:30.447 2.394 - 2.406: 98.2269% ( 23) 00:15:30.447 2.406 - 2.418: 98.2846% ( 8) 00:15:30.447 2.418 - 2.430: 98.3422% ( 8) 00:15:30.447 2.430 - 2.441: 98.3855% ( 6) 00:15:30.447 2.441 - 2.453: 98.4287% ( 6) 00:15:30.447 2.453 - 2.465: 98.4792% ( 7) 00:15:30.447 2.465 - 2.477: 98.5008% ( 3) 00:15:30.447 2.477 - 2.489: 98.5152% ( 2) 00:15:30.447 2.489 - 2.501: 98.5224% ( 1) 00:15:30.447 2.513 - 2.524: 98.5296% ( 1) 00:15:30.447 2.524 - 2.536: 98.5440% ( 2) 00:15:30.447 2.536 - 2.548: 98.5512% ( 1) 00:15:30.447 2.572 - 2.584: 98.5657% ( 2) 00:15:30.447 2.619 - 2.631: 98.5729% ( 1) 00:15:30.447 2.631 - 2.643: 98.5801% ( 1) 00:15:30.447 2.702 - 2.714: 98.5873% ( 1) 00:15:30.447 3.153 - 3.176: 98.5945% ( 1) 00:15:30.447 3.200 - 3.224: 98.6089% ( 2) 00:15:30.447 3.224 - 3.247: 98.6233% ( 2) 00:15:30.447 3.271 - 3.295: 98.6377% ( 2) 00:15:30.447 3.295 - 3.319: 98.6522% ( 2) 00:15:30.447 3.319 - 3.342: 98.6666% ( 2) 00:15:30.447 3.342 - 3.366: 98.6738% ( 1) 00:15:30.447 3.366 - 3.390: 98.6882% ( 2) 00:15:30.447 3.437 - 3.461: 98.6954% ( 1) 00:15:30.447 3.461 - 3.484: 98.7026% ( 1) 00:15:30.447 3.484 - 3.508: 98.7170% ( 2) 00:15:30.447 3.508 - 3.532: 98.7242% ( 1) 00:15:30.447 3.556 - 3.579: 98.7314% ( 1) 00:15:30.447 3.579 - 3.603: 98.7386% ( 1) 00:15:30.447 3.603 - 3.627: 98.7531% ( 2) 00:15:30.447 3.650 - 3.674: 98.7603% ( 1) 00:15:30.447 3.721 - 3.745: 98.7675% ( 1) 00:15:30.447 3.793 - 3.816: 98.7819% 
( 2) 00:15:30.447 3.816 - 3.840: 98.7891% ( 1) 00:15:30.447 3.935 - 3.959: 98.7963% ( 1) 00:15:30.447 4.219 - 4.243: 98.8035% ( 1) 00:15:30.447 4.314 - 4.338: 98.8107% ( 1) 00:15:30.447 4.764 - 4.788: 98.8179% ( 1) 00:15:30.447 5.144 - 5.167: 98.8251% ( 1) 00:15:30.447 5.428 - 5.452: 98.8323% ( 1) 00:15:30.447 5.499 - 5.523: 98.8396% ( 1) 00:15:30.447 5.523 - 5.547: 98.8468% ( 1) 00:15:30.447 5.570 - 5.594: 98.8540% ( 1) 00:15:30.447 5.973 - 5.997: 98.8612% ( 1) 00:15:30.447 6.495 - 6.542: 98.8756% ( 2) 00:15:30.447 6.542 - 6.590: 98.8828% ( 1) 00:15:30.447 6.827 - 6.874: 98.8900% ( 1) 00:15:30.447 6.921 - 6.969: 98.8972% ( 1) 00:15:30.447 7.111 - 7.159: 98.9044% ( 1) 00:15:30.447 7.206 - 7.253: 98.9116% ( 1) 00:15:30.447 8.059 - 8.107: 98.9188% ( 1) 00:15:30.447 8.628 - 8.676: 98.9260% ( 1) 00:15:30.447 12.231 - 12.326: 98.9333% ( 1) 00:15:30.447 13.938 - 14.033: 98.9405% ( 1) 00:15:30.447 15.739 - 15.834: 98.9693% ( 4) 00:15:30.447 15.834 - 15.929: 99.0053% ( 5) 00:15:30.447 15.929 - 16.024: 99.0270% ( 3) 00:15:30.447 16.024 - 16.119: 99.0774% ( 7) 00:15:30.447 16.119 - 16.213: 99.1062% ( 4) 00:15:30.447 16.213 - 16.308: 99.1207% ( 2) 00:15:30.447 16.308 - 16.403: 99.1279% ( 1) 00:15:30.447 16.403 - 16.498: 99.1783% ( 7) 00:15:30.447 16.498 - 16.593: 99.2144% ( 5) 00:15:30.447 16.593 - 16.687: 99.2216% ( 1) 00:15:30.447 16.687 - 16.782: 99.2432% ( 3) 00:15:30.447 16.782 - 16.877: 99.2792% ( 5) 00:15:30.447 16.877 - 16.972: 99.3153% ( 5) 00:15:30.447 16.972 - 17.067: 99.3441% ( 4) 00:15:30.447 17.067 - 17.161: 99.3657% ( 3) 00:15:30.447 17.256 - 17.351: 99.3729% ( 1) 00:15:30.447 17.351 - 17.446: 99.3873% ( 2) 00:15:30.447 17.541 - 17.636: 99.3946% ( 1) 00:15:30.447 17.730 - 17.825: 99.4018% ( 1) 00:15:30.447 17.825 - 17.920: 99.4090% ( 1) 00:15:30.448 18.110 - 18.204: 99.4162% ( 1) 00:15:30.448 19.437 - 19.532: 99.4234% ( 1) 00:15:30.448 25.600 - 25.790: 99.4306% ( 1) 00:15:30.448 48.166 - 48.356: 99.4378% ( 1) 00:15:30.448 3009.801 - 3021.938: 99.4450% ( 1) 00:15:30.448 3980.705 - 4004.978: 99.9495% ( 70) 00:15:30.448 4004.978 - 4029.250: 99.9928% ( 6) 00:15:30.448 4029.250 - 4053.523: 100.0000% ( 1) 00:15:30.448 00:15:30.448 00:55:14 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:30.448 00:55:14 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:30.448 00:55:14 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:30.448 00:55:14 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:30.448 00:55:14 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.706 [2024-07-23 00:55:14.705508] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:30.706 [ 00:15:30.706 { 00:15:30.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.706 "subtype": "Discovery", 00:15:30.706 "listen_addresses": [], 00:15:30.706 "allow_any_host": true, 00:15:30.706 "hosts": [] 00:15:30.706 }, 00:15:30.706 { 00:15:30.706 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.706 "subtype": "NVMe", 00:15:30.706 "listen_addresses": [ 00:15:30.706 { 00:15:30.706 "transport": "VFIOUSER", 00:15:30.706 "trtype": "VFIOUSER", 00:15:30.706 "adrfam": "IPv4", 00:15:30.706 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.706 "trsvcid": "0" 00:15:30.706 } 
00:15:30.706 ], 00:15:30.706 "allow_any_host": true, 00:15:30.706 "hosts": [], 00:15:30.706 "serial_number": "SPDK1", 00:15:30.706 "model_number": "SPDK bdev Controller", 00:15:30.706 "max_namespaces": 32, 00:15:30.706 "min_cntlid": 1, 00:15:30.706 "max_cntlid": 65519, 00:15:30.706 "namespaces": [ 00:15:30.706 { 00:15:30.706 "nsid": 1, 00:15:30.706 "bdev_name": "Malloc1", 00:15:30.706 "name": "Malloc1", 00:15:30.706 "nguid": "7842F0BD6C4C40C387F1FBDE37C894A7", 00:15:30.706 "uuid": "7842f0bd-6c4c-40c3-87f1-fbde37c894a7" 00:15:30.706 } 00:15:30.706 ] 00:15:30.706 }, 00:15:30.706 { 00:15:30.706 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.706 "subtype": "NVMe", 00:15:30.706 "listen_addresses": [ 00:15:30.706 { 00:15:30.706 "transport": "VFIOUSER", 00:15:30.706 "trtype": "VFIOUSER", 00:15:30.706 "adrfam": "IPv4", 00:15:30.706 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.706 "trsvcid": "0" 00:15:30.706 } 00:15:30.706 ], 00:15:30.706 "allow_any_host": true, 00:15:30.706 "hosts": [], 00:15:30.706 "serial_number": "SPDK2", 00:15:30.706 "model_number": "SPDK bdev Controller", 00:15:30.706 "max_namespaces": 32, 00:15:30.706 "min_cntlid": 1, 00:15:30.706 "max_cntlid": 65519, 00:15:30.706 "namespaces": [ 00:15:30.706 { 00:15:30.706 "nsid": 1, 00:15:30.706 "bdev_name": "Malloc2", 00:15:30.706 "name": "Malloc2", 00:15:30.706 "nguid": "BCE5FF1FE17845AC9275B9124160BA90", 00:15:30.706 "uuid": "bce5ff1f-e178-45ac-9275-b9124160ba90" 00:15:30.706 } 00:15:30.706 ] 00:15:30.706 } 00:15:30.706 ] 00:15:30.706 00:55:14 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:30.706 00:55:14 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3375036 00:15:30.706 00:55:14 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:30.706 00:55:14 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:30.706 00:55:14 -- common/autotest_common.sh@1244 -- # local i=0 00:15:30.706 00:55:14 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:30.706 00:55:14 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:30.706 00:55:14 -- common/autotest_common.sh@1255 -- # return 0 00:15:30.706 00:55:14 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:30.706 00:55:14 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:30.706 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.964 Malloc3 00:15:30.964 00:55:15 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:31.222 00:55:15 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:31.222 Asynchronous Event Request test 00:15:31.222 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.222 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.222 Registering asynchronous event callbacks... 00:15:31.222 Starting namespace attribute notice tests for all controllers... 00:15:31.222 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:31.222 aer_cb - Changed Namespace 00:15:31.222 Cleaning up... 
00:15:31.484 [ 00:15:31.484 { 00:15:31.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:31.484 "subtype": "Discovery", 00:15:31.484 "listen_addresses": [], 00:15:31.484 "allow_any_host": true, 00:15:31.484 "hosts": [] 00:15:31.484 }, 00:15:31.484 { 00:15:31.484 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:31.484 "subtype": "NVMe", 00:15:31.484 "listen_addresses": [ 00:15:31.484 { 00:15:31.484 "transport": "VFIOUSER", 00:15:31.484 "trtype": "VFIOUSER", 00:15:31.484 "adrfam": "IPv4", 00:15:31.484 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:31.484 "trsvcid": "0" 00:15:31.484 } 00:15:31.484 ], 00:15:31.484 "allow_any_host": true, 00:15:31.484 "hosts": [], 00:15:31.484 "serial_number": "SPDK1", 00:15:31.484 "model_number": "SPDK bdev Controller", 00:15:31.484 "max_namespaces": 32, 00:15:31.484 "min_cntlid": 1, 00:15:31.484 "max_cntlid": 65519, 00:15:31.484 "namespaces": [ 00:15:31.484 { 00:15:31.484 "nsid": 1, 00:15:31.484 "bdev_name": "Malloc1", 00:15:31.484 "name": "Malloc1", 00:15:31.484 "nguid": "7842F0BD6C4C40C387F1FBDE37C894A7", 00:15:31.484 "uuid": "7842f0bd-6c4c-40c3-87f1-fbde37c894a7" 00:15:31.484 }, 00:15:31.484 { 00:15:31.484 "nsid": 2, 00:15:31.484 "bdev_name": "Malloc3", 00:15:31.484 "name": "Malloc3", 00:15:31.484 "nguid": "B3FB288D348D4143997B64F637A2A5A6", 00:15:31.484 "uuid": "b3fb288d-348d-4143-997b-64f637a2a5a6" 00:15:31.484 } 00:15:31.484 ] 00:15:31.484 }, 00:15:31.484 { 00:15:31.484 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:31.484 "subtype": "NVMe", 00:15:31.484 "listen_addresses": [ 00:15:31.484 { 00:15:31.484 "transport": "VFIOUSER", 00:15:31.484 "trtype": "VFIOUSER", 00:15:31.484 "adrfam": "IPv4", 00:15:31.484 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:31.484 "trsvcid": "0" 00:15:31.484 } 00:15:31.484 ], 00:15:31.484 "allow_any_host": true, 00:15:31.484 "hosts": [], 00:15:31.484 "serial_number": "SPDK2", 00:15:31.484 "model_number": "SPDK bdev Controller", 00:15:31.484 "max_namespaces": 32, 00:15:31.484 "min_cntlid": 1, 00:15:31.484 "max_cntlid": 65519, 00:15:31.484 "namespaces": [ 00:15:31.484 { 00:15:31.484 "nsid": 1, 00:15:31.484 "bdev_name": "Malloc2", 00:15:31.484 "name": "Malloc2", 00:15:31.484 "nguid": "BCE5FF1FE17845AC9275B9124160BA90", 00:15:31.484 "uuid": "bce5ff1f-e178-45ac-9275-b9124160ba90" 00:15:31.484 } 00:15:31.484 ] 00:15:31.484 } 00:15:31.484 ] 00:15:31.484 00:55:15 -- target/nvmf_vfio_user.sh@44 -- # wait 3375036 00:15:31.484 00:55:15 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:31.484 00:55:15 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:31.484 00:55:15 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:31.484 00:55:15 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:31.484 [2024-07-23 00:55:15.493507] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:15:31.485 [2024-07-23 00:55:15.493542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375176 ] 00:15:31.485 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.485 [2024-07-23 00:55:15.524516] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:31.485 [2024-07-23 00:55:15.533986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:31.485 [2024-07-23 00:55:15.534015] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f26c5925000 00:15:31.485 [2024-07-23 00:55:15.534987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.535998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.536997] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.537999] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.539006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.540013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.541019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.542028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.485 [2024-07-23 00:55:15.543040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:31.485 [2024-07-23 00:55:15.543062] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f26c46db000 00:15:31.485 [2024-07-23 00:55:15.544179] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.485 [2024-07-23 00:55:15.559207] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:31.485 [2024-07-23 00:55:15.559236] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:31.485 [2024-07-23 00:55:15.564346] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:31.485 [2024-07-23 00:55:15.564403] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:31.485 [2024-07-23 00:55:15.564486] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:31.485 [2024-07-23 00:55:15.564510] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:31.485 [2024-07-23 00:55:15.564520] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:31.485 [2024-07-23 00:55:15.565355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:31.485 [2024-07-23 00:55:15.565381] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:31.485 [2024-07-23 00:55:15.565394] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:31.485 [2024-07-23 00:55:15.566361] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:31.485 [2024-07-23 00:55:15.566381] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:31.485 [2024-07-23 00:55:15.566394] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:31.485 [2024-07-23 00:55:15.567370] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:31.485 [2024-07-23 00:55:15.567389] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:31.485 [2024-07-23 00:55:15.568374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:31.485 [2024-07-23 00:55:15.568393] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:31.485 [2024-07-23 00:55:15.568402] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:31.485 [2024-07-23 00:55:15.568413] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:31.485 [2024-07-23 00:55:15.568523] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:31.485 [2024-07-23 00:55:15.568530] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:31.485 [2024-07-23 00:55:15.568538] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:31.485 [2024-07-23 00:55:15.569384] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:31.485 [2024-07-23 00:55:15.570394] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:31.485 [2024-07-23 00:55:15.571409] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:31.485 [2024-07-23 00:55:15.572440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:31.485 [2024-07-23 00:55:15.573424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:31.485 [2024-07-23 00:55:15.573443] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:31.485 [2024-07-23 00:55:15.573452] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.573475] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:31.485 [2024-07-23 00:55:15.573487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.573504] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.485 [2024-07-23 00:55:15.573515] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.485 [2024-07-23 00:55:15.573533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.485 [2024-07-23 00:55:15.581626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:31.485 [2024-07-23 00:55:15.581648] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:31.485 [2024-07-23 00:55:15.581657] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:31.485 [2024-07-23 00:55:15.581665] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:31.485 [2024-07-23 00:55:15.581678] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:31.485 [2024-07-23 00:55:15.581686] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:31.485 [2024-07-23 00:55:15.581694] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:31.485 [2024-07-23 00:55:15.581702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.581719] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.581735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:31.485 [2024-07-23 00:55:15.589623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:31.485 [2024-07-23 
00:55:15.589650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.485 [2024-07-23 00:55:15.589665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.485 [2024-07-23 00:55:15.589677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.485 [2024-07-23 00:55:15.589688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.485 [2024-07-23 00:55:15.589697] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.589711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.589726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:31.485 [2024-07-23 00:55:15.597625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:31.485 [2024-07-23 00:55:15.597643] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:31.485 [2024-07-23 00:55:15.597652] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.597663] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.597676] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:31.485 [2024-07-23 00:55:15.597691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.605626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.605694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.605709] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.605721] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:31.486 [2024-07-23 00:55:15.605730] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:31.486 [2024-07-23 00:55:15.605740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.613640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
00:15:31.486 [2024-07-23 00:55:15.613666] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:31.486 [2024-07-23 00:55:15.613685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.613698] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.613711] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.486 [2024-07-23 00:55:15.613719] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.486 [2024-07-23 00:55:15.613729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.621625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.621651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.621666] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.621679] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.486 [2024-07-23 00:55:15.621688] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.486 [2024-07-23 00:55:15.621697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.629621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.629641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.629654] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.629668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.629678] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.629687] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:31.486 [2024-07-23 00:55:15.629698] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:31.486 [2024-07-23 00:55:15.629706] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:31.486 [2024-07-23 
00:55:15.629715] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:31.486 [2024-07-23 00:55:15.629739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.637625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.637652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.645625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.645650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.653622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.653647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.661624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.661650] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:31.486 [2024-07-23 00:55:15.661660] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:31.486 [2024-07-23 00:55:15.661666] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:31.486 [2024-07-23 00:55:15.661672] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:31.486 [2024-07-23 00:55:15.661682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:31.486 [2024-07-23 00:55:15.661693] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:31.486 [2024-07-23 00:55:15.661701] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:31.486 [2024-07-23 00:55:15.661710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.661721] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:31.486 [2024-07-23 00:55:15.661729] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.486 [2024-07-23 00:55:15.661738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.661749] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:31.486 [2024-07-23 00:55:15.661757] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:31.486 [2024-07-23 00:55:15.661766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:31.486 [2024-07-23 00:55:15.669622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.669650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.669666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:31.486 [2024-07-23 00:55:15.669681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:31.486 ===================================================== 00:15:31.486 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.486 ===================================================== 00:15:31.486 Controller Capabilities/Features 00:15:31.486 ================================ 00:15:31.486 Vendor ID: 4e58 00:15:31.486 Subsystem Vendor ID: 4e58 00:15:31.486 Serial Number: SPDK2 00:15:31.486 Model Number: SPDK bdev Controller 00:15:31.486 Firmware Version: 24.01.1 00:15:31.486 Recommended Arb Burst: 6 00:15:31.486 IEEE OUI Identifier: 8d 6b 50 00:15:31.486 Multi-path I/O 00:15:31.486 May have multiple subsystem ports: Yes 00:15:31.486 May have multiple controllers: Yes 00:15:31.486 Associated with SR-IOV VF: No 00:15:31.486 Max Data Transfer Size: 131072 00:15:31.486 Max Number of Namespaces: 32 00:15:31.486 Max Number of I/O Queues: 127 00:15:31.486 NVMe Specification Version (VS): 1.3 00:15:31.486 NVMe Specification Version (Identify): 1.3 00:15:31.486 Maximum Queue Entries: 256 00:15:31.486 Contiguous Queues Required: Yes 00:15:31.486 Arbitration Mechanisms Supported 00:15:31.486 Weighted Round Robin: Not Supported 00:15:31.486 Vendor Specific: Not Supported 00:15:31.486 Reset Timeout: 15000 ms 00:15:31.486 Doorbell Stride: 4 bytes 00:15:31.486 NVM Subsystem Reset: Not Supported 00:15:31.486 Command Sets Supported 00:15:31.486 NVM Command Set: Supported 00:15:31.486 Boot Partition: Not Supported 00:15:31.486 Memory Page Size Minimum: 4096 bytes 00:15:31.486 Memory Page Size Maximum: 4096 bytes 00:15:31.486 Persistent Memory Region: Not Supported 00:15:31.486 Optional Asynchronous Events Supported 00:15:31.486 Namespace Attribute Notices: Supported 00:15:31.486 Firmware Activation Notices: Not Supported 00:15:31.486 ANA Change Notices: Not Supported 00:15:31.486 PLE Aggregate Log Change Notices: Not Supported 00:15:31.486 LBA Status Info Alert Notices: Not Supported 00:15:31.486 EGE Aggregate Log Change Notices: Not Supported 00:15:31.486 Normal NVM Subsystem Shutdown event: Not Supported 00:15:31.486 Zone Descriptor Change Notices: Not Supported 00:15:31.486 Discovery Log Change Notices: Not Supported 00:15:31.486 Controller Attributes 00:15:31.486 128-bit Host Identifier: Supported 00:15:31.486 Non-Operational Permissive Mode: Not Supported 00:15:31.486 NVM Sets: Not Supported 00:15:31.486 Read Recovery Levels: Not Supported 00:15:31.486 Endurance Groups: Not Supported 00:15:31.486 Predictable Latency Mode: Not Supported 00:15:31.486 Traffic Based Keep ALive: Not Supported 00:15:31.486 Namespace Granularity: Not Supported 00:15:31.487 SQ Associations: Not Supported 00:15:31.487 UUID List: Not Supported 00:15:31.487 Multi-Domain Subsystem: Not Supported 00:15:31.487 Fixed Capacity Management: Not Supported 
00:15:31.487 Variable Capacity Management: Not Supported 00:15:31.487 Delete Endurance Group: Not Supported 00:15:31.487 Delete NVM Set: Not Supported 00:15:31.487 Extended LBA Formats Supported: Not Supported 00:15:31.487 Flexible Data Placement Supported: Not Supported 00:15:31.487 00:15:31.487 Controller Memory Buffer Support 00:15:31.487 ================================ 00:15:31.487 Supported: No 00:15:31.487 00:15:31.487 Persistent Memory Region Support 00:15:31.487 ================================ 00:15:31.487 Supported: No 00:15:31.487 00:15:31.487 Admin Command Set Attributes 00:15:31.487 ============================ 00:15:31.487 Security Send/Receive: Not Supported 00:15:31.487 Format NVM: Not Supported 00:15:31.487 Firmware Activate/Download: Not Supported 00:15:31.487 Namespace Management: Not Supported 00:15:31.487 Device Self-Test: Not Supported 00:15:31.487 Directives: Not Supported 00:15:31.487 NVMe-MI: Not Supported 00:15:31.487 Virtualization Management: Not Supported 00:15:31.487 Doorbell Buffer Config: Not Supported 00:15:31.487 Get LBA Status Capability: Not Supported 00:15:31.487 Command & Feature Lockdown Capability: Not Supported 00:15:31.487 Abort Command Limit: 4 00:15:31.487 Async Event Request Limit: 4 00:15:31.487 Number of Firmware Slots: N/A 00:15:31.487 Firmware Slot 1 Read-Only: N/A 00:15:31.487 Firmware Activation Without Reset: N/A 00:15:31.487 Multiple Update Detection Support: N/A 00:15:31.487 Firmware Update Granularity: No Information Provided 00:15:31.487 Per-Namespace SMART Log: No 00:15:31.487 Asymmetric Namespace Access Log Page: Not Supported 00:15:31.487 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:31.487 Command Effects Log Page: Supported 00:15:31.487 Get Log Page Extended Data: Supported 00:15:31.487 Telemetry Log Pages: Not Supported 00:15:31.487 Persistent Event Log Pages: Not Supported 00:15:31.487 Supported Log Pages Log Page: May Support 00:15:31.487 Commands Supported & Effects Log Page: Not Supported 00:15:31.487 Feature Identifiers & Effects Log Page:May Support 00:15:31.487 NVMe-MI Commands & Effects Log Page: May Support 00:15:31.487 Data Area 4 for Telemetry Log: Not Supported 00:15:31.487 Error Log Page Entries Supported: 128 00:15:31.487 Keep Alive: Supported 00:15:31.487 Keep Alive Granularity: 10000 ms 00:15:31.487 00:15:31.487 NVM Command Set Attributes 00:15:31.487 ========================== 00:15:31.487 Submission Queue Entry Size 00:15:31.487 Max: 64 00:15:31.487 Min: 64 00:15:31.487 Completion Queue Entry Size 00:15:31.487 Max: 16 00:15:31.487 Min: 16 00:15:31.487 Number of Namespaces: 32 00:15:31.487 Compare Command: Supported 00:15:31.487 Write Uncorrectable Command: Not Supported 00:15:31.487 Dataset Management Command: Supported 00:15:31.487 Write Zeroes Command: Supported 00:15:31.487 Set Features Save Field: Not Supported 00:15:31.487 Reservations: Not Supported 00:15:31.487 Timestamp: Not Supported 00:15:31.487 Copy: Supported 00:15:31.487 Volatile Write Cache: Present 00:15:31.487 Atomic Write Unit (Normal): 1 00:15:31.487 Atomic Write Unit (PFail): 1 00:15:31.487 Atomic Compare & Write Unit: 1 00:15:31.487 Fused Compare & Write: Supported 00:15:31.487 Scatter-Gather List 00:15:31.487 SGL Command Set: Supported (Dword aligned) 00:15:31.487 SGL Keyed: Not Supported 00:15:31.487 SGL Bit Bucket Descriptor: Not Supported 00:15:31.487 SGL Metadata Pointer: Not Supported 00:15:31.487 Oversized SGL: Not Supported 00:15:31.487 SGL Metadata Address: Not Supported 00:15:31.487 SGL Offset: Not Supported 00:15:31.487 
Transport SGL Data Block: Not Supported 00:15:31.487 Replay Protected Memory Block: Not Supported 00:15:31.487 00:15:31.487 Firmware Slot Information 00:15:31.487 ========================= 00:15:31.487 Active slot: 1 00:15:31.487 Slot 1 Firmware Revision: 24.01.1 00:15:31.487 00:15:31.487 00:15:31.487 Commands Supported and Effects 00:15:31.487 ============================== 00:15:31.487 Admin Commands 00:15:31.487 -------------- 00:15:31.487 Get Log Page (02h): Supported 00:15:31.487 Identify (06h): Supported 00:15:31.487 Abort (08h): Supported 00:15:31.487 Set Features (09h): Supported 00:15:31.487 Get Features (0Ah): Supported 00:15:31.487 Asynchronous Event Request (0Ch): Supported 00:15:31.487 Keep Alive (18h): Supported 00:15:31.487 I/O Commands 00:15:31.487 ------------ 00:15:31.487 Flush (00h): Supported LBA-Change 00:15:31.487 Write (01h): Supported LBA-Change 00:15:31.487 Read (02h): Supported 00:15:31.487 Compare (05h): Supported 00:15:31.487 Write Zeroes (08h): Supported LBA-Change 00:15:31.487 Dataset Management (09h): Supported LBA-Change 00:15:31.487 Copy (19h): Supported LBA-Change 00:15:31.487 Unknown (79h): Supported LBA-Change 00:15:31.487 Unknown (7Ah): Supported 00:15:31.487 00:15:31.487 Error Log 00:15:31.487 ========= 00:15:31.487 00:15:31.487 Arbitration 00:15:31.487 =========== 00:15:31.487 Arbitration Burst: 1 00:15:31.487 00:15:31.487 Power Management 00:15:31.487 ================ 00:15:31.487 Number of Power States: 1 00:15:31.487 Current Power State: Power State #0 00:15:31.487 Power State #0: 00:15:31.487 Max Power: 0.00 W 00:15:31.487 Non-Operational State: Operational 00:15:31.487 Entry Latency: Not Reported 00:15:31.487 Exit Latency: Not Reported 00:15:31.487 Relative Read Throughput: 0 00:15:31.487 Relative Read Latency: 0 00:15:31.487 Relative Write Throughput: 0 00:15:31.487 Relative Write Latency: 0 00:15:31.487 Idle Power: Not Reported 00:15:31.487 Active Power: Not Reported 00:15:31.487 Non-Operational Permissive Mode: Not Supported 00:15:31.487 00:15:31.487 Health Information 00:15:31.487 ================== 00:15:31.487 Critical Warnings: 00:15:31.487 Available Spare Space: OK 00:15:31.487 Temperature: OK 00:15:31.487 Device Reliability: OK 00:15:31.487 Read Only: No 00:15:31.487 Volatile Memory Backup: OK 00:15:31.487 Current Temperature: 0 Kelvin[2024-07-23 00:55:15.669810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:31.487 [2024-07-23 00:55:15.677622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:31.487 [2024-07-23 00:55:15.677667] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:31.487 [2024-07-23 00:55:15.677684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.487 [2024-07-23 00:55:15.677695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.487 [2024-07-23 00:55:15.677705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.487 [2024-07-23 00:55:15.677714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.487 [2024-07-23 00:55:15.677797] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:31.487 [2024-07-23 00:55:15.677818] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:31.487 [2024-07-23 00:55:15.678841] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:31.487 [2024-07-23 00:55:15.678857] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:31.487 [2024-07-23 00:55:15.679808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:31.487 [2024-07-23 00:55:15.679832] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:31.487 [2024-07-23 00:55:15.679882] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:31.487 [2024-07-23 00:55:15.681073] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.748 (-273 Celsius) 00:15:31.748 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:31.748 Available Spare: 0% 00:15:31.748 Available Spare Threshold: 0% 00:15:31.748 Life Percentage Used: 0% 00:15:31.748 Data Units Read: 0 00:15:31.748 Data Units Written: 0 00:15:31.748 Host Read Commands: 0 00:15:31.748 Host Write Commands: 0 00:15:31.748 Controller Busy Time: 0 minutes 00:15:31.748 Power Cycles: 0 00:15:31.748 Power On Hours: 0 hours 00:15:31.748 Unsafe Shutdowns: 0 00:15:31.748 Unrecoverable Media Errors: 0 00:15:31.748 Lifetime Error Log Entries: 0 00:15:31.748 Warning Temperature Time: 0 minutes 00:15:31.748 Critical Temperature Time: 0 minutes 00:15:31.748 00:15:31.748 Number of Queues 00:15:31.748 ================ 00:15:31.748 Number of I/O Submission Queues: 127 00:15:31.748 Number of I/O Completion Queues: 127 00:15:31.748 00:15:31.748 Active Namespaces 00:15:31.748 ================= 00:15:31.748 Namespace ID:1 00:15:31.748 Error Recovery Timeout: Unlimited 00:15:31.748 Command Set Identifier: NVM (00h) 00:15:31.748 Deallocate: Supported 00:15:31.748 Deallocated/Unwritten Error: Not Supported 00:15:31.748 Deallocated Read Value: Unknown 00:15:31.748 Deallocate in Write Zeroes: Not Supported 00:15:31.748 Deallocated Guard Field: 0xFFFF 00:15:31.748 Flush: Supported 00:15:31.748 Reservation: Supported 00:15:31.748 Namespace Sharing Capabilities: Multiple Controllers 00:15:31.748 Size (in LBAs): 131072 (0GiB) 00:15:31.748 Capacity (in LBAs): 131072 (0GiB) 00:15:31.748 Utilization (in LBAs): 131072 (0GiB) 00:15:31.748 NGUID: BCE5FF1FE17845AC9275B9124160BA90 00:15:31.748 UUID: bce5ff1f-e178-45ac-9275-b9124160ba90 00:15:31.748 Thin Provisioning: Not Supported 00:15:31.748 Per-NS Atomic Units: Yes 00:15:31.748 Atomic Boundary Size (Normal): 0 00:15:31.748 Atomic Boundary Size (PFail): 0 00:15:31.748 Atomic Boundary Offset: 0 00:15:31.748 Maximum Single Source Range Length: 65535 00:15:31.748 Maximum Copy Length: 65535 00:15:31.748 Maximum Source Range Count: 1 00:15:31.748 NGUID/EUI64 Never Reused: No 00:15:31.748 Namespace Write Protected: No 00:15:31.748 Number of LBA Formats: 1 00:15:31.748 Current LBA Format: LBA Format #00 00:15:31.748 LBA Format #00: Data Size: 512 Metadata Size: 0 
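With identify done, the same endpoint is benchmarked with spdk_nvme_perf: 4 KiB reads, then 4 KiB writes, queue depth 128, five seconds each, pinned to core 1 via -c 0x2 (matching the "NSID 1 with lcore 1" association printed in the results). A condensed form of the two invocations from the log; TRID is only a local shorthand introduced here for the transport ID string:

  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2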
00:15:31.748 00:15:31.748 00:55:15 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:31.748 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.025 Initializing NVMe Controllers 00:15:37.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:37.025 Initialization complete. Launching workers. 00:15:37.025 ======================================================== 00:15:37.025 Latency(us) 00:15:37.025 Device Information : IOPS MiB/s Average min max 00:15:37.025 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37436.54 146.24 3418.61 1132.07 6651.19 00:15:37.025 ======================================================== 00:15:37.025 Total : 37436.54 146.24 3418.61 1132.07 6651.19 00:15:37.025 00:15:37.025 00:55:21 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:37.025 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.332 Initializing NVMe Controllers 00:15:42.332 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.332 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:42.332 Initialization complete. Launching workers. 00:15:42.332 ======================================================== 00:15:42.332 Latency(us) 00:15:42.332 Device Information : IOPS MiB/s Average min max 00:15:42.332 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36166.07 141.27 3538.42 1166.81 8718.74 00:15:42.332 ======================================================== 00:15:42.332 Total : 36166.07 141.27 3538.42 1166.81 8718.74 00:15:42.332 00:15:42.332 00:55:26 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:42.332 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.605 Initializing NVMe Controllers 00:15:47.605 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.605 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.605 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:47.605 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:47.605 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:47.605 Initialization complete. Launching workers. 
00:15:47.605 Starting thread on core 2 00:15:47.605 Starting thread on core 3 00:15:47.605 Starting thread on core 1 00:15:47.605 00:55:31 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:47.605 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.888 Initializing NVMe Controllers 00:15:50.888 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.888 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:50.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:50.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:50.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:50.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:50.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:50.888 Initialization complete. Launching workers. 00:15:50.889 Starting thread on core 1 with urgent priority queue 00:15:50.889 Starting thread on core 2 with urgent priority queue 00:15:50.889 Starting thread on core 3 with urgent priority queue 00:15:50.889 Starting thread on core 0 with urgent priority queue 00:15:50.889 SPDK bdev Controller (SPDK2 ) core 0: 5652.33 IO/s 17.69 secs/100000 ios 00:15:50.889 SPDK bdev Controller (SPDK2 ) core 1: 6513.00 IO/s 15.35 secs/100000 ios 00:15:50.889 SPDK bdev Controller (SPDK2 ) core 2: 6670.00 IO/s 14.99 secs/100000 ios 00:15:50.889 SPDK bdev Controller (SPDK2 ) core 3: 7008.00 IO/s 14.27 secs/100000 ios 00:15:50.889 ======================================================== 00:15:50.889 00:15:50.889 00:55:35 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:50.889 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.146 Initializing NVMe Controllers 00:15:51.146 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.146 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.146 Namespace ID: 1 size: 0GB 00:15:51.146 Initialization complete. 00:15:51.146 INFO: using host memory buffer for IO 00:15:51.146 Hello world! 00:15:51.146 00:55:35 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:51.404 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.781 Initializing NVMe Controllers 00:15:52.781 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.781 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.781 Initialization complete. Launching workers. 
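The submit/complete latency histograms that follow come from the overhead example started just above; per the log it was run as:

  test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

It drives 4 KiB I/O for one second and, with -H (presumably the histogram switch, given the output that follows), reports per-bucket timings of the submit and completion paths in addition to the avg/min/max summary.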
00:15:52.781 submit (in ns) avg, min, max = 8478.7, 3451.1, 4022400.0 00:15:52.781 complete (in ns) avg, min, max = 22885.0, 2052.2, 4019866.7 00:15:52.781 00:15:52.781 Submit histogram 00:15:52.781 ================ 00:15:52.781 Range in us Cumulative Count 00:15:52.781 3.437 - 3.461: 0.2096% ( 29) 00:15:52.781 3.461 - 3.484: 0.9252% ( 99) 00:15:52.781 3.484 - 3.508: 2.5587% ( 226) 00:15:52.781 3.508 - 3.532: 7.0257% ( 618) 00:15:52.781 3.532 - 3.556: 14.1525% ( 986) 00:15:52.781 3.556 - 3.579: 24.7055% ( 1460) 00:15:52.781 3.579 - 3.603: 34.6946% ( 1382) 00:15:52.781 3.603 - 3.627: 44.7705% ( 1394) 00:15:52.781 3.627 - 3.650: 52.9454% ( 1131) 00:15:52.781 3.650 - 3.674: 59.2049% ( 866) 00:15:52.781 3.674 - 3.698: 63.6429% ( 614) 00:15:52.781 3.698 - 3.721: 67.9508% ( 596) 00:15:52.781 3.721 - 3.745: 70.7336% ( 385) 00:15:52.781 3.745 - 3.769: 73.6827% ( 408) 00:15:52.781 3.769 - 3.793: 76.7185% ( 420) 00:15:52.781 3.793 - 3.816: 79.9133% ( 442) 00:15:52.781 3.816 - 3.840: 83.3466% ( 475) 00:15:52.781 3.840 - 3.864: 86.1294% ( 385) 00:15:52.781 3.864 - 3.887: 88.4930% ( 327) 00:15:52.781 3.887 - 3.911: 90.3289% ( 254) 00:15:52.781 3.911 - 3.935: 92.0347% ( 236) 00:15:52.781 3.935 - 3.959: 93.2996% ( 175) 00:15:52.781 3.959 - 3.982: 94.2682% ( 134) 00:15:52.781 3.982 - 4.006: 94.9621% ( 96) 00:15:52.781 4.006 - 4.030: 95.5403% ( 80) 00:15:52.781 4.030 - 4.053: 95.9740% ( 60) 00:15:52.781 4.053 - 4.077: 96.3426% ( 51) 00:15:52.781 4.077 - 4.101: 96.5450% ( 28) 00:15:52.781 4.101 - 4.124: 96.6534% ( 15) 00:15:52.781 4.124 - 4.148: 96.7907% ( 19) 00:15:52.781 4.148 - 4.172: 96.9064% ( 16) 00:15:52.781 4.172 - 4.196: 97.0076% ( 14) 00:15:52.781 4.196 - 4.219: 97.1088% ( 14) 00:15:52.781 4.219 - 4.243: 97.2244% ( 16) 00:15:52.781 4.243 - 4.267: 97.2678% ( 6) 00:15:52.781 4.267 - 4.290: 97.3329% ( 9) 00:15:52.781 4.290 - 4.314: 97.3473% ( 2) 00:15:52.781 4.314 - 4.338: 97.3762% ( 4) 00:15:52.781 4.338 - 4.361: 97.3979% ( 3) 00:15:52.781 4.361 - 4.385: 97.4340% ( 5) 00:15:52.781 4.385 - 4.409: 97.4485% ( 2) 00:15:52.782 4.433 - 4.456: 97.4557% ( 1) 00:15:52.782 4.456 - 4.480: 97.4774% ( 3) 00:15:52.782 4.504 - 4.527: 97.4919% ( 2) 00:15:52.782 4.527 - 4.551: 97.5063% ( 2) 00:15:52.782 4.551 - 4.575: 97.5136% ( 1) 00:15:52.782 4.575 - 4.599: 97.5208% ( 1) 00:15:52.782 4.599 - 4.622: 97.5280% ( 1) 00:15:52.782 4.622 - 4.646: 97.5641% ( 5) 00:15:52.782 4.646 - 4.670: 97.5858% ( 3) 00:15:52.782 4.670 - 4.693: 97.6292% ( 6) 00:15:52.782 4.693 - 4.717: 97.6798% ( 7) 00:15:52.782 4.717 - 4.741: 97.7449% ( 9) 00:15:52.782 4.741 - 4.764: 97.8027% ( 8) 00:15:52.782 4.764 - 4.788: 97.8244% ( 3) 00:15:52.782 4.788 - 4.812: 97.8605% ( 5) 00:15:52.782 4.812 - 4.836: 97.8966% ( 5) 00:15:52.782 4.836 - 4.859: 97.9545% ( 8) 00:15:52.782 4.859 - 4.883: 97.9689% ( 2) 00:15:52.782 4.883 - 4.907: 98.0123% ( 6) 00:15:52.782 4.907 - 4.930: 98.0846% ( 10) 00:15:52.782 4.930 - 4.954: 98.0990% ( 2) 00:15:52.782 4.954 - 4.978: 98.1207% ( 3) 00:15:52.782 4.978 - 5.001: 98.1641% ( 6) 00:15:52.782 5.001 - 5.025: 98.1858% ( 3) 00:15:52.782 5.025 - 5.049: 98.1930% ( 1) 00:15:52.782 5.049 - 5.073: 98.2147% ( 3) 00:15:52.782 5.073 - 5.096: 98.2219% ( 1) 00:15:52.782 5.096 - 5.120: 98.2291% ( 1) 00:15:52.782 5.167 - 5.191: 98.2364% ( 1) 00:15:52.782 5.191 - 5.215: 98.2508% ( 2) 00:15:52.782 5.215 - 5.239: 98.2653% ( 2) 00:15:52.782 5.262 - 5.286: 98.2725% ( 1) 00:15:52.782 5.476 - 5.499: 98.2870% ( 2) 00:15:52.782 5.665 - 5.689: 98.2942% ( 1) 00:15:52.782 5.689 - 5.713: 98.3086% ( 2) 00:15:52.782 5.736 - 5.760: 98.3159% ( 1) 
00:15:52.782 5.855 - 5.879: 98.3231% ( 1) 00:15:52.782 6.021 - 6.044: 98.3303% ( 1) 00:15:52.782 6.068 - 6.116: 98.3375% ( 1) 00:15:52.782 6.163 - 6.210: 98.3448% ( 1) 00:15:52.782 6.258 - 6.305: 98.3592% ( 2) 00:15:52.782 6.542 - 6.590: 98.3665% ( 1) 00:15:52.782 6.684 - 6.732: 98.3809% ( 2) 00:15:52.782 6.827 - 6.874: 98.3881% ( 1) 00:15:52.782 6.874 - 6.921: 98.3954% ( 1) 00:15:52.782 6.969 - 7.016: 98.4026% ( 1) 00:15:52.782 7.016 - 7.064: 98.4243% ( 3) 00:15:52.782 7.064 - 7.111: 98.4460% ( 3) 00:15:52.782 7.111 - 7.159: 98.4677% ( 3) 00:15:52.782 7.206 - 7.253: 98.4821% ( 2) 00:15:52.782 7.253 - 7.301: 98.4966% ( 2) 00:15:52.782 7.301 - 7.348: 98.5038% ( 1) 00:15:52.782 7.348 - 7.396: 98.5255% ( 3) 00:15:52.782 7.396 - 7.443: 98.5327% ( 1) 00:15:52.782 7.443 - 7.490: 98.5399% ( 1) 00:15:52.782 7.538 - 7.585: 98.5544% ( 2) 00:15:52.782 7.585 - 7.633: 98.5688% ( 2) 00:15:52.782 7.680 - 7.727: 98.5761% ( 1) 00:15:52.782 7.727 - 7.775: 98.5833% ( 1) 00:15:52.782 7.775 - 7.822: 98.6050% ( 3) 00:15:52.782 7.822 - 7.870: 98.6194% ( 2) 00:15:52.782 7.870 - 7.917: 98.6411% ( 3) 00:15:52.782 7.964 - 8.012: 98.6628% ( 3) 00:15:52.782 8.012 - 8.059: 98.6700% ( 1) 00:15:52.782 8.154 - 8.201: 98.6773% ( 1) 00:15:52.782 8.201 - 8.249: 98.6845% ( 1) 00:15:52.782 8.249 - 8.296: 98.6990% ( 2) 00:15:52.782 8.344 - 8.391: 98.7206% ( 3) 00:15:52.782 8.628 - 8.676: 98.7279% ( 1) 00:15:52.782 8.818 - 8.865: 98.7351% ( 1) 00:15:52.782 8.913 - 8.960: 98.7423% ( 1) 00:15:52.782 8.960 - 9.007: 98.7568% ( 2) 00:15:52.782 9.007 - 9.055: 98.7640% ( 1) 00:15:52.782 9.102 - 9.150: 98.7712% ( 1) 00:15:52.782 9.150 - 9.197: 98.8001% ( 4) 00:15:52.782 9.576 - 9.624: 98.8074% ( 1) 00:15:52.782 9.624 - 9.671: 98.8146% ( 1) 00:15:52.782 9.766 - 9.813: 98.8291% ( 2) 00:15:52.782 9.813 - 9.861: 98.8363% ( 1) 00:15:52.782 9.908 - 9.956: 98.8507% ( 2) 00:15:52.782 9.956 - 10.003: 98.8580% ( 1) 00:15:52.782 10.003 - 10.050: 98.8652% ( 1) 00:15:52.782 10.050 - 10.098: 98.8724% ( 1) 00:15:52.782 10.382 - 10.430: 98.8797% ( 1) 00:15:52.782 10.714 - 10.761: 98.8869% ( 1) 00:15:52.782 10.809 - 10.856: 98.8941% ( 1) 00:15:52.782 11.093 - 11.141: 98.9013% ( 1) 00:15:52.782 11.330 - 11.378: 98.9086% ( 1) 00:15:52.782 11.425 - 11.473: 98.9158% ( 1) 00:15:52.782 11.520 - 11.567: 98.9302% ( 2) 00:15:52.782 12.136 - 12.231: 98.9375% ( 1) 00:15:52.782 12.231 - 12.326: 98.9447% ( 1) 00:15:52.782 12.326 - 12.421: 98.9519% ( 1) 00:15:52.782 12.421 - 12.516: 98.9592% ( 1) 00:15:52.782 12.610 - 12.705: 98.9664% ( 1) 00:15:52.782 12.800 - 12.895: 98.9736% ( 1) 00:15:52.782 13.084 - 13.179: 98.9808% ( 1) 00:15:52.782 13.274 - 13.369: 98.9881% ( 1) 00:15:52.782 13.369 - 13.464: 99.0098% ( 3) 00:15:52.782 13.464 - 13.559: 99.0314% ( 3) 00:15:52.782 13.559 - 13.653: 99.0387% ( 1) 00:15:52.782 13.653 - 13.748: 99.0531% ( 2) 00:15:52.782 13.938 - 14.033: 99.0604% ( 1) 00:15:52.782 14.033 - 14.127: 99.0676% ( 1) 00:15:52.782 14.317 - 14.412: 99.0820% ( 2) 00:15:52.782 14.981 - 15.076: 99.0893% ( 1) 00:15:52.782 15.170 - 15.265: 99.0965% ( 1) 00:15:52.782 17.067 - 17.161: 99.1037% ( 1) 00:15:52.782 17.161 - 17.256: 99.1110% ( 1) 00:15:52.782 17.256 - 17.351: 99.1326% ( 3) 00:15:52.782 17.351 - 17.446: 99.1615% ( 4) 00:15:52.782 17.446 - 17.541: 99.1905% ( 4) 00:15:52.782 17.541 - 17.636: 99.2338% ( 6) 00:15:52.782 17.636 - 17.730: 99.2772% ( 6) 00:15:52.782 17.730 - 17.825: 99.2844% ( 1) 00:15:52.782 17.825 - 17.920: 99.3350% ( 7) 00:15:52.782 17.920 - 18.015: 99.3639% ( 4) 00:15:52.782 18.015 - 18.110: 99.4218% ( 8) 00:15:52.782 18.110 - 18.204: 
99.5085% ( 12) 00:15:52.782 18.204 - 18.299: 99.5519% ( 6) 00:15:52.782 18.299 - 18.394: 99.6097% ( 8) 00:15:52.782 18.394 - 18.489: 99.6241% ( 2) 00:15:52.782 18.489 - 18.584: 99.6747% ( 7) 00:15:52.782 18.584 - 18.679: 99.7037% ( 4) 00:15:52.782 18.679 - 18.773: 99.7398% ( 5) 00:15:52.782 18.773 - 18.868: 99.7759% ( 5) 00:15:52.782 19.058 - 19.153: 99.7904% ( 2) 00:15:52.782 19.153 - 19.247: 99.7976% ( 1) 00:15:52.782 19.247 - 19.342: 99.8048% ( 1) 00:15:52.782 19.721 - 19.816: 99.8265% ( 3) 00:15:52.782 20.290 - 20.385: 99.8338% ( 1) 00:15:52.782 20.480 - 20.575: 99.8410% ( 1) 00:15:52.782 21.144 - 21.239: 99.8482% ( 1) 00:15:52.782 21.618 - 21.713: 99.8554% ( 1) 00:15:52.782 23.040 - 23.135: 99.8627% ( 1) 00:15:52.782 25.221 - 25.410: 99.8699% ( 1) 00:15:52.782 28.065 - 28.255: 99.8771% ( 1) 00:15:52.782 31.479 - 31.668: 99.8844% ( 1) 00:15:52.782 3980.705 - 4004.978: 99.9783% ( 13) 00:15:52.782 4004.978 - 4029.250: 100.0000% ( 3) 00:15:52.782 00:15:52.782 Complete histogram 00:15:52.782 ================== 00:15:52.782 Range in us Cumulative Count 00:15:52.782 2.050 - 2.062: 4.4163% ( 611) 00:15:52.782 2.062 - 2.074: 26.2667% ( 3023) 00:15:52.782 2.074 - 2.086: 30.9216% ( 644) 00:15:52.782 2.086 - 2.098: 44.4163% ( 1867) 00:15:52.782 2.098 - 2.110: 63.0358% ( 2576) 00:15:52.782 2.110 - 2.121: 65.8186% ( 385) 00:15:52.782 2.121 - 2.133: 69.6928% ( 536) 00:15:52.782 2.133 - 2.145: 74.7308% ( 697) 00:15:52.782 2.145 - 2.157: 76.0752% ( 186) 00:15:52.782 2.157 - 2.169: 84.0477% ( 1103) 00:15:52.782 2.169 - 2.181: 90.0181% ( 826) 00:15:52.782 2.181 - 2.193: 91.5866% ( 217) 00:15:52.782 2.193 - 2.204: 93.0972% ( 209) 00:15:52.782 2.204 - 2.216: 93.8634% ( 106) 00:15:52.782 2.216 - 2.228: 94.5862% ( 100) 00:15:52.782 2.228 - 2.240: 95.3741% ( 109) 00:15:52.782 2.240 - 2.252: 95.6632% ( 40) 00:15:52.782 2.252 - 2.264: 95.8077% ( 20) 00:15:52.782 2.264 - 2.276: 95.9523% ( 20) 00:15:52.782 2.276 - 2.287: 96.0390% ( 12) 00:15:52.782 2.287 - 2.299: 96.1402% ( 14) 00:15:52.782 2.299 - 2.311: 96.2053% ( 9) 00:15:52.782 2.311 - 2.323: 96.2848% ( 11) 00:15:52.782 2.323 - 2.335: 96.4438% ( 22) 00:15:52.782 2.335 - 2.347: 96.7329% ( 40) 00:15:52.782 2.347 - 2.359: 96.9642% ( 32) 00:15:52.782 2.359 - 2.370: 97.3184% ( 49) 00:15:52.782 2.370 - 2.382: 97.5931% ( 38) 00:15:52.782 2.382 - 2.394: 97.8316% ( 33) 00:15:52.782 2.394 - 2.406: 98.0484% ( 30) 00:15:52.782 2.406 - 2.418: 98.1713% ( 17) 00:15:52.782 2.418 - 2.430: 98.2508% ( 11) 00:15:52.782 2.430 - 2.441: 98.3665% ( 16) 00:15:52.782 2.441 - 2.453: 98.3954% ( 4) 00:15:52.782 2.453 - 2.465: 98.4098% ( 2) 00:15:52.782 2.465 - 2.477: 98.4460% ( 5) 00:15:52.782 2.477 - 2.489: 98.4749% ( 4) 00:15:52.782 2.489 - 2.501: 98.4893% ( 2) 00:15:52.783 2.501 - 2.513: 98.4966% ( 1) 00:15:52.783 2.513 - 2.524: 98.5255% ( 4) 00:15:52.783 2.560 - 2.572: 98.5327% ( 1) 00:15:52.783 2.572 - 2.584: 98.5399% ( 1) 00:15:52.783 2.596 - 2.607: 98.5472% ( 1) 00:15:52.783 2.607 - 2.619: 98.5688% ( 3) 00:15:52.783 2.679 - 2.690: 98.5761% ( 1) 00:15:52.783 2.738 - 2.750: 98.5833% ( 1) 00:15:52.783 2.999 - 3.010: 98.5905% ( 1) 00:15:52.783 3.176 - 3.200: 98.5978% ( 1) 00:15:52.783 3.200 - 3.224: 98.6050% ( 1) 00:15:52.783 3.366 - 3.390: 98.6122% ( 1) 00:15:52.783 3.390 - 3.413: 98.6194% ( 1) 00:15:52.783 3.413 - 3.437: 98.6556% ( 5) 00:15:52.783 3.437 - 3.461: 98.6628% ( 1) 00:15:52.783 3.461 - 3.484: 98.6990% ( 5) 00:15:52.783 3.508 - 3.532: 98.7062% ( 1) 00:15:52.783 3.532 - 3.556: 98.7134% ( 1) 00:15:52.783 3.556 - 3.579: 98.7206% ( 1) 00:15:52.783 3.627 - 3.650: 98.7351% ( 2) 
00:15:52.783 3.650 - 3.674: 98.7423% ( 1) 00:15:52.783 3.698 - 3.721: 98.7495% ( 1) 00:15:52.783 3.721 - 3.745: 98.7568% ( 1) 00:15:52.783 3.769 - 3.793: 98.7640% ( 1) 00:15:52.783 3.793 - 3.816: 98.7712% ( 1) 00:15:52.783 3.816 - 3.840: 98.7785% ( 1) 00:15:52.783 3.911 - 3.935: 98.7857% ( 1) 00:15:52.783 4.196 - 4.219: 98.7929% ( 1) 00:15:52.783 4.954 - 4.978: 98.8001% ( 1) 00:15:52.783 4.978 - 5.001: 98.8074% ( 1) 00:15:52.783 5.073 - 5.096: 98.8218% ( 2) 00:15:52.783 5.689 - 5.713: 98.8291% ( 1) 00:15:52.783 5.807 - 5.831: 98.8363% ( 1) 00:15:52.783 5.831 - 5.855: 98.8435% ( 1) 00:15:52.783 5.950 - 5.973: 98.8507% ( 1) 00:15:52.783 6.068 - 6.116: 98.8580% ( 1) 00:15:52.783 6.400 - 6.447: 98.8652% ( 1) 00:15:52.783 6.590 - 6.637: 98.8724% ( 1) 00:15:52.783 6.732 - 6.779: 98.8797% ( 1) 00:15:52.783 6.969 - 7.016: 98.8869% ( 1) 00:15:52.783 7.064 - 7.111: 98.8941% ( 1) 00:15:52.783 7.111 - 7.159: 98.9013% ( 1) 00:15:52.783 7.159 - 7.206: 98.9086% ( 1) 00:15:52.783 7.301 - 7.348: 98.9158% ( 1) 00:15:52.783 8.439 - 8.486: 98.9230% ( 1) 00:15:52.783 8.865 - 8.913: 98.9302% ( 1) 00:15:52.783 9.150 - 9.197: 98.9375% ( 1) 00:15:52.783 12.705 - 12.800: 98.9447% ( 1) 00:15:52.783 15.360 - 15.455: 98.9519% ( 1) 00:15:52.783 15.644 - 15.739: 98.9592% ( 1) 00:15:52.783 15.739 - 15.834: 98.9736% ( 2) 00:15:52.783 15.834 - 15.929: 98.9881% ( 2) 00:15:52.783 15.929 - 16.024: 99.0170% ( 4) 00:15:52.783 16.024 - 16.119: 99.0531% ( 5) 00:15:52.783 16.119 - 16.213: 99.0604% ( 1) 00:15:52.783 16.213 - 16.308: 99.0676% ( 1) 00:15:52.783 16.308 - 16.403: 99.0893% ( 3) 00:15:52.783 16.403 - 16.498: 99.1326% ( 6) 00:15:52.783 16.498 - 16.593: 99.2266% ( 13) 00:15:52.783 16.593 - 16.687: 99.2483% ( 3) 00:15:52.783 16.687 - 16.782: 99.2772% ( 4) 00:15:52.783 16.782 - 16.877: 99.2917% ( 2) 00:15:52.783 16.877 - 16.972: 99.3061% ( 2) 00:15:52.783 16.972 - 17.067: 99.3422% ( 5) 00:15:52.783 17.067 - 17.161: 99.3495% ( 1) 00:15:52.783 17.161 - 17.256: 99.3567% ( 1) 00:15:52.783 17.351 - 17.446: 99.3639% ( 1) 00:15:52.783 17.541 - 17.636: 99.3928% ( 4) 00:15:52.783 17.636 - 17.730: 99.4001% ( 1) 00:15:52.783 17.730 - 17.825: 99.4073% ( 1) 00:15:52.783 17.825 - 17.920: 99.4218% ( 2) 00:15:52.783 17.920 - 18.015: 99.4290% ( 1) 00:15:52.783 18.110 - 18.204: 99.4362% ( 1) 00:15:52.783 18.204 - 18.299: 99.4507% ( 2) 00:15:52.783 18.299 - 18.394: 99.4651% ( 2) 00:15:52.783 18.489 - 18.584: 99.4724% ( 1) 00:15:52.783 22.471 - 22.566: 99.4796% ( 1) 00:15:52.783 2936.984 - 2949.120: 99.4868% ( 1) 00:15:52.783 3131.164 - 3155.437: 99.4940% ( 1) 00:15:52.783 3980.705 - 4004.978: 99.8988% ( 56) 00:15:52.783 4004.978 - 4029.250: 100.0000% ( 14) 00:15:52.783 00:15:52.783 00:55:36 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:52.783 00:55:36 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:52.783 00:55:36 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:52.783 00:55:36 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:52.783 00:55:36 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.783 [ 00:15:52.783 { 00:15:52.783 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:52.783 "subtype": "Discovery", 00:15:52.783 "listen_addresses": [], 00:15:52.783 "allow_any_host": true, 00:15:52.783 "hosts": [] 00:15:52.783 }, 00:15:52.783 { 00:15:52.783 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:15:52.783 "subtype": "NVMe", 00:15:52.783 "listen_addresses": [ 00:15:52.783 { 00:15:52.783 "transport": "VFIOUSER", 00:15:52.783 "trtype": "VFIOUSER", 00:15:52.783 "adrfam": "IPv4", 00:15:52.783 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:52.783 "trsvcid": "0" 00:15:52.783 } 00:15:52.783 ], 00:15:52.783 "allow_any_host": true, 00:15:52.783 "hosts": [], 00:15:52.783 "serial_number": "SPDK1", 00:15:52.783 "model_number": "SPDK bdev Controller", 00:15:52.783 "max_namespaces": 32, 00:15:52.783 "min_cntlid": 1, 00:15:52.783 "max_cntlid": 65519, 00:15:52.783 "namespaces": [ 00:15:52.783 { 00:15:52.783 "nsid": 1, 00:15:52.783 "bdev_name": "Malloc1", 00:15:52.783 "name": "Malloc1", 00:15:52.783 "nguid": "7842F0BD6C4C40C387F1FBDE37C894A7", 00:15:52.783 "uuid": "7842f0bd-6c4c-40c3-87f1-fbde37c894a7" 00:15:52.783 }, 00:15:52.783 { 00:15:52.783 "nsid": 2, 00:15:52.783 "bdev_name": "Malloc3", 00:15:52.783 "name": "Malloc3", 00:15:52.783 "nguid": "B3FB288D348D4143997B64F637A2A5A6", 00:15:52.783 "uuid": "b3fb288d-348d-4143-997b-64f637a2a5a6" 00:15:52.783 } 00:15:52.783 ] 00:15:52.783 }, 00:15:52.783 { 00:15:52.783 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:52.783 "subtype": "NVMe", 00:15:52.783 "listen_addresses": [ 00:15:52.783 { 00:15:52.783 "transport": "VFIOUSER", 00:15:52.783 "trtype": "VFIOUSER", 00:15:52.783 "adrfam": "IPv4", 00:15:52.783 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:52.783 "trsvcid": "0" 00:15:52.783 } 00:15:52.783 ], 00:15:52.783 "allow_any_host": true, 00:15:52.783 "hosts": [], 00:15:52.783 "serial_number": "SPDK2", 00:15:52.783 "model_number": "SPDK bdev Controller", 00:15:52.783 "max_namespaces": 32, 00:15:52.783 "min_cntlid": 1, 00:15:52.783 "max_cntlid": 65519, 00:15:52.783 "namespaces": [ 00:15:52.783 { 00:15:52.783 "nsid": 1, 00:15:52.783 "bdev_name": "Malloc2", 00:15:52.783 "name": "Malloc2", 00:15:52.783 "nguid": "BCE5FF1FE17845AC9275B9124160BA90", 00:15:52.783 "uuid": "bce5ff1f-e178-45ac-9275-b9124160ba90" 00:15:52.783 } 00:15:52.783 ] 00:15:52.783 } 00:15:52.783 ] 00:15:53.041 00:55:36 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:53.041 00:55:36 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3377773 00:15:53.041 00:55:36 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:53.041 00:55:36 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:53.041 00:55:36 -- common/autotest_common.sh@1244 -- # local i=0 00:15:53.042 00:55:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:53.042 00:55:36 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:53.042 00:55:36 -- common/autotest_common.sh@1255 -- # return 0 00:15:53.042 00:55:36 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:53.042 00:55:36 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:53.042 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.042 Malloc4 00:15:53.300 00:55:37 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:53.300 00:55:37 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:53.558 Asynchronous Event Request test 00:15:53.558 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.558 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.558 Registering asynchronous event callbacks... 00:15:53.558 Starting namespace attribute notice tests for all controllers... 00:15:53.558 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:53.558 aer_cb - Changed Namespace 00:15:53.558 Cleaning up... 00:15:53.558 [ 00:15:53.558 { 00:15:53.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:53.558 "subtype": "Discovery", 00:15:53.558 "listen_addresses": [], 00:15:53.558 "allow_any_host": true, 00:15:53.558 "hosts": [] 00:15:53.558 }, 00:15:53.558 { 00:15:53.558 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:53.558 "subtype": "NVMe", 00:15:53.558 "listen_addresses": [ 00:15:53.558 { 00:15:53.558 "transport": "VFIOUSER", 00:15:53.558 "trtype": "VFIOUSER", 00:15:53.558 "adrfam": "IPv4", 00:15:53.558 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:53.558 "trsvcid": "0" 00:15:53.558 } 00:15:53.558 ], 00:15:53.558 "allow_any_host": true, 00:15:53.558 "hosts": [], 00:15:53.558 "serial_number": "SPDK1", 00:15:53.558 "model_number": "SPDK bdev Controller", 00:15:53.558 "max_namespaces": 32, 00:15:53.558 "min_cntlid": 1, 00:15:53.558 "max_cntlid": 65519, 00:15:53.558 "namespaces": [ 00:15:53.558 { 00:15:53.558 "nsid": 1, 00:15:53.558 "bdev_name": "Malloc1", 00:15:53.558 "name": "Malloc1", 00:15:53.558 "nguid": "7842F0BD6C4C40C387F1FBDE37C894A7", 00:15:53.558 "uuid": "7842f0bd-6c4c-40c3-87f1-fbde37c894a7" 00:15:53.558 }, 00:15:53.558 { 00:15:53.558 "nsid": 2, 00:15:53.558 "bdev_name": "Malloc3", 00:15:53.558 "name": "Malloc3", 00:15:53.558 "nguid": "B3FB288D348D4143997B64F637A2A5A6", 00:15:53.558 "uuid": "b3fb288d-348d-4143-997b-64f637a2a5a6" 00:15:53.558 } 00:15:53.558 ] 00:15:53.558 }, 00:15:53.558 { 00:15:53.558 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:53.558 "subtype": "NVMe", 00:15:53.558 "listen_addresses": [ 00:15:53.558 { 00:15:53.558 "transport": "VFIOUSER", 00:15:53.558 "trtype": "VFIOUSER", 00:15:53.558 "adrfam": "IPv4", 00:15:53.558 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:53.558 "trsvcid": "0" 00:15:53.558 } 00:15:53.558 ], 00:15:53.558 "allow_any_host": true, 00:15:53.558 "hosts": [], 00:15:53.558 "serial_number": "SPDK2", 00:15:53.558 "model_number": "SPDK bdev Controller", 00:15:53.558 "max_namespaces": 32, 00:15:53.558 "min_cntlid": 1, 00:15:53.558 "max_cntlid": 65519, 00:15:53.558 "namespaces": [ 00:15:53.558 { 00:15:53.558 "nsid": 1, 00:15:53.558 "bdev_name": "Malloc2", 00:15:53.558 "name": "Malloc2", 00:15:53.558 "nguid": "BCE5FF1FE17845AC9275B9124160BA90", 00:15:53.558 "uuid": "bce5ff1f-e178-45ac-9275-b9124160ba90" 
00:15:53.558 }, 00:15:53.558 { 00:15:53.558 "nsid": 2, 00:15:53.558 "bdev_name": "Malloc4", 00:15:53.558 "name": "Malloc4", 00:15:53.559 "nguid": "04481C7024834EFBAAB37255E62ACCCF", 00:15:53.559 "uuid": "04481c70-2483-4efb-aab3-7255e62acccf" 00:15:53.559 } 00:15:53.559 ] 00:15:53.559 } 00:15:53.559 ] 00:15:53.559 00:55:37 -- target/nvmf_vfio_user.sh@44 -- # wait 3377773 00:15:53.559 00:55:37 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:53.559 00:55:37 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3371985 00:15:53.559 00:55:37 -- common/autotest_common.sh@926 -- # '[' -z 3371985 ']' 00:15:53.559 00:55:37 -- common/autotest_common.sh@930 -- # kill -0 3371985 00:15:53.559 00:55:37 -- common/autotest_common.sh@931 -- # uname 00:15:53.559 00:55:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.559 00:55:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3371985 00:15:53.818 00:55:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.818 00:55:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.818 00:55:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3371985' 00:15:53.818 killing process with pid 3371985 00:15:53.818 00:55:37 -- common/autotest_common.sh@945 -- # kill 3371985 00:15:53.818 [2024-07-23 00:55:37.762026] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:53.818 00:55:37 -- common/autotest_common.sh@950 -- # wait 3371985 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3377914 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3377914' 00:15:54.077 Process pid: 3377914 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:54.077 00:55:38 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3377914 00:15:54.077 00:55:38 -- common/autotest_common.sh@819 -- # '[' -z 3377914 ']' 00:15:54.077 00:55:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.077 00:55:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:54.077 00:55:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.077 00:55:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:54.077 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:15:54.077 [2024-07-23 00:55:38.142702] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:54.077 [2024-07-23 00:55:38.143775] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:15:54.077 [2024-07-23 00:55:38.143838] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.077 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.077 [2024-07-23 00:55:38.207284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.337 [2024-07-23 00:55:38.295655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.337 [2024-07-23 00:55:38.295810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.337 [2024-07-23 00:55:38.295832] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.337 [2024-07-23 00:55:38.295847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.337 [2024-07-23 00:55:38.295918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.337 [2024-07-23 00:55:38.295994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.337 [2024-07-23 00:55:38.296085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.337 [2024-07-23 00:55:38.296087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.337 [2024-07-23 00:55:38.397818] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:15:54.337 [2024-07-23 00:55:38.398076] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:15:54.337 [2024-07-23 00:55:38.398367] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:15:54.337 [2024-07-23 00:55:38.399143] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:54.337 [2024-07-23 00:55:38.399244] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
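For reference, a minimal sketch of the interrupt-mode target launch exercised in this pass. The nvmf_tgt binary path and arguments are the ones logged above; the readiness loop is illustrative only (the test harness uses its own waitforlisten helper rather than polling rpc.py):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target pinned to cores 0-3 with reactors in interrupt mode (arguments as logged above).
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  # Illustrative readiness check: wait until the RPC socket answers before issuing configuration RPCs.
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Once the thread.c notices above confirm the poll groups are in interrupt mode, the script sleeps one second and goes on to create the VFIOUSER transport with '-M -I'.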
00:15:55.274 00:55:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.274 00:55:39 -- common/autotest_common.sh@852 -- # return 0 00:15:55.274 00:55:39 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:56.211 00:55:40 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:56.211 00:55:40 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:56.211 00:55:40 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:56.211 00:55:40 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:56.211 00:55:40 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:56.211 00:55:40 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:56.469 Malloc1 00:15:56.469 00:55:40 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:56.726 00:55:40 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:56.984 00:55:41 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:57.241 00:55:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:57.241 00:55:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:57.241 00:55:41 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:57.500 Malloc2 00:15:57.500 00:55:41 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:57.757 00:55:41 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:58.015 00:55:42 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:58.275 00:55:42 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:58.275 00:55:42 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3377914 00:15:58.275 00:55:42 -- common/autotest_common.sh@926 -- # '[' -z 3377914 ']' 00:15:58.275 00:55:42 -- common/autotest_common.sh@930 -- # kill -0 3377914 00:15:58.275 00:55:42 -- common/autotest_common.sh@931 -- # uname 00:15:58.275 00:55:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:58.275 00:55:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3377914 00:15:58.275 00:55:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:58.275 00:55:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:58.275 00:55:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3377914' 00:15:58.275 killing process with pid 3377914 00:15:58.275 00:55:42 -- common/autotest_common.sh@945 -- # kill 3377914 00:15:58.275 00:55:42 -- common/autotest_common.sh@950 -- # wait 3377914 00:15:58.534 00:55:42 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:15:58.534 00:55:42 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:58.534 00:15:58.534 real 0m53.480s 00:15:58.534 user 3m31.555s 00:15:58.534 sys 0m4.492s 00:15:58.534 00:55:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.534 00:55:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 ************************************ 00:15:58.534 END TEST nvmf_vfio_user 00:15:58.534 ************************************ 00:15:58.534 00:55:42 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:58.534 00:55:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:58.534 00:55:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.534 00:55:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 ************************************ 00:15:58.534 START TEST nvmf_vfio_user_nvme_compliance 00:15:58.534 ************************************ 00:15:58.534 00:55:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:58.534 * Looking for test storage... 00:15:58.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:58.534 00:55:42 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.534 00:55:42 -- nvmf/common.sh@7 -- # uname -s 00:15:58.534 00:55:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.534 00:55:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.534 00:55:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.534 00:55:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.534 00:55:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.534 00:55:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.534 00:55:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.534 00:55:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.534 00:55:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.534 00:55:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.534 00:55:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.534 00:55:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.534 00:55:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.534 00:55:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.534 00:55:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.534 00:55:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.534 00:55:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.534 00:55:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.534 00:55:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.534 00:55:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.534 00:55:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.534 00:55:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.534 00:55:42 -- paths/export.sh@5 -- # export PATH 00:15:58.534 00:55:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.534 00:55:42 -- nvmf/common.sh@46 -- # : 0 00:15:58.534 00:55:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:58.534 00:55:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:58.534 00:55:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:58.534 00:55:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.534 00:55:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.534 00:55:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:58.534 00:55:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:58.534 00:55:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:58.534 00:55:42 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.534 00:55:42 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.534 00:55:42 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:58.534 00:55:42 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:58.534 00:55:42 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:58.534 00:55:42 -- compliance/compliance.sh@20 -- # nvmfpid=3378538 00:15:58.534 00:55:42 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x7 00:15:58.534 00:55:42 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3378538' 00:15:58.534 Process pid: 3378538 00:15:58.534 00:55:42 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.534 00:55:42 -- compliance/compliance.sh@24 -- # waitforlisten 3378538 00:15:58.534 00:55:42 -- common/autotest_common.sh@819 -- # '[' -z 3378538 ']' 00:15:58.534 00:55:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.534 00:55:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.534 00:55:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.534 00:55:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.534 00:55:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.793 [2024-07-23 00:55:42.753953] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:58.793 [2024-07-23 00:55:42.754032] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.793 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.793 [2024-07-23 00:55:42.811187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:58.793 [2024-07-23 00:55:42.893124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.793 [2024-07-23 00:55:42.893277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.793 [2024-07-23 00:55:42.893294] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.793 [2024-07-23 00:55:42.893306] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:58.793 [2024-07-23 00:55:42.893358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.793 [2024-07-23 00:55:42.895633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.793 [2024-07-23 00:55:42.895644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.764 00:55:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.764 00:55:43 -- common/autotest_common.sh@852 -- # return 0 00:15:59.764 00:55:43 -- compliance/compliance.sh@26 -- # sleep 1 00:16:00.701 00:55:44 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:00.701 00:55:44 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:00.701 00:55:44 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:00.701 00:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.701 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:00.701 00:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.701 00:55:44 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:00.701 00:55:44 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:00.701 00:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.701 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:00.701 malloc0 00:16:00.701 00:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.701 00:55:44 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:00.701 00:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.701 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:00.701 00:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.701 00:55:44 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:00.701 00:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.701 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:00.701 00:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.701 00:55:44 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:00.701 00:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.701 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:00.701 00:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.701 00:55:44 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:00.701 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.962 00:16:00.962 00:16:00.962 CUnit - A unit testing framework for C - Version 2.1-3 00:16:00.962 http://cunit.sourceforge.net/ 00:16:00.962 00:16:00.962 00:16:00.962 Suite: nvme_compliance 00:16:00.962 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-23 00:55:44.975608] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:00.962 [2024-07-23 00:55:44.975673] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:00.962 [2024-07-23 00:55:44.975685] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:00.962 passed 00:16:00.962 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:01.222 Test: admin_identify_ns ...[2024-07-23 
00:55:45.222645] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:01.222 [2024-07-23 00:55:45.230632] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:01.222 passed 00:16:01.222 Test: admin_get_features_mandatory_features ...passed 00:16:01.480 Test: admin_get_features_optional_features ...passed 00:16:01.480 Test: admin_set_features_number_of_queues ...passed 00:16:01.738 Test: admin_get_log_page_mandatory_logs ...passed 00:16:01.738 Test: admin_get_log_page_with_lpo ...[2024-07-23 00:55:45.860641] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:01.738 passed 00:16:01.998 Test: fabric_property_get ...passed 00:16:01.998 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-23 00:55:46.045853] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:01.998 passed 00:16:02.258 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-23 00:55:46.214626] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:02.258 [2024-07-23 00:55:46.230638] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:02.258 passed 00:16:02.258 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-23 00:55:46.322143] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:02.258 passed 00:16:02.517 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-23 00:55:46.485621] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:02.517 [2024-07-23 00:55:46.509643] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:02.517 passed 00:16:02.517 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-23 00:55:46.600589] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:02.517 [2024-07-23 00:55:46.600664] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:02.517 passed 00:16:02.777 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-23 00:55:46.776629] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:02.777 [2024-07-23 00:55:46.784621] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:02.777 [2024-07-23 00:55:46.792643] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:02.777 [2024-07-23 00:55:46.800636] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:02.777 passed 00:16:02.777 Test: admin_create_io_sq_verify_pc ...[2024-07-23 00:55:46.928652] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:02.777 passed 00:16:04.154 Test: admin_create_io_qp_max_qps ...[2024-07-23 00:55:48.154630] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:04.413 passed 00:16:04.671 Test: admin_create_io_sq_shared_cq ...[2024-07-23 00:55:48.750651] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:04.671 passed 00:16:04.671 00:16:04.671 Run Summary: Type Total Ran Passed Failed Inactive 00:16:04.671 suites 1 1 n/a 0 0 00:16:04.671 tests 18 18 18 0 0 00:16:04.671 asserts 360 360 360 0 n/a 00:16:04.671 00:16:04.671 Elapsed time = 1.583 seconds 00:16:04.671 
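For orientation, a minimal sketch of the target configuration the compliance run above exercises. The commands mirror the rpc_cmd calls logged earlier; the script itself goes through its rpc_cmd wrapper rather than invoking rpc.py directly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $RPC bdev_malloc_create 64 512 -b malloc0
  $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # Run the 18 compliance cases against the vfio-user endpoint (same trid string as above).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The *ERROR* lines from vfio_user.c and ctrlr.c during the run are expected negative-path output: the cases deliberately submit malformed admin and I/O queue commands and assert that the target rejects them, which is why the summary still reports 0 failed tests.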
00:55:48 -- compliance/compliance.sh@42 -- # killprocess 3378538 00:16:04.671 00:55:48 -- common/autotest_common.sh@926 -- # '[' -z 3378538 ']' 00:16:04.671 00:55:48 -- common/autotest_common.sh@930 -- # kill -0 3378538 00:16:04.671 00:55:48 -- common/autotest_common.sh@931 -- # uname 00:16:04.671 00:55:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.671 00:55:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3378538 00:16:04.671 00:55:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.671 00:55:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.671 00:55:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3378538' 00:16:04.671 killing process with pid 3378538 00:16:04.671 00:55:48 -- common/autotest_common.sh@945 -- # kill 3378538 00:16:04.671 00:55:48 -- common/autotest_common.sh@950 -- # wait 3378538 00:16:04.929 00:55:49 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:04.929 00:55:49 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:04.929 00:16:04.929 real 0m6.450s 00:16:04.929 user 0m18.618s 00:16:04.929 sys 0m0.559s 00:16:04.929 00:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.929 00:55:49 -- common/autotest_common.sh@10 -- # set +x 00:16:04.929 ************************************ 00:16:04.929 END TEST nvmf_vfio_user_nvme_compliance 00:16:04.929 ************************************ 00:16:05.188 00:55:49 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:05.188 00:55:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:05.188 00:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.188 00:55:49 -- common/autotest_common.sh@10 -- # set +x 00:16:05.188 ************************************ 00:16:05.188 START TEST nvmf_vfio_user_fuzz 00:16:05.188 ************************************ 00:16:05.188 00:55:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:05.188 * Looking for test storage... 
00:16:05.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.188 00:55:49 -- nvmf/common.sh@7 -- # uname -s 00:16:05.188 00:55:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.188 00:55:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.188 00:55:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.188 00:55:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.188 00:55:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.188 00:55:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.188 00:55:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.188 00:55:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.188 00:55:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.188 00:55:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.188 00:55:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.188 00:55:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.188 00:55:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.188 00:55:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.188 00:55:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.188 00:55:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.188 00:55:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.188 00:55:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.188 00:55:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.188 00:55:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.188 00:55:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.188 00:55:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.188 00:55:49 -- paths/export.sh@5 -- # export PATH 00:16:05.188 00:55:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.188 00:55:49 -- nvmf/common.sh@46 -- # : 0 00:16:05.188 00:55:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:05.188 00:55:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:05.188 00:55:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:05.188 00:55:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.188 00:55:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.188 00:55:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:05.188 00:55:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:05.188 00:55:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3379402 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3379402' 00:16:05.188 Process pid: 3379402 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.188 00:55:49 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3379402 00:16:05.188 00:55:49 -- common/autotest_common.sh@819 -- # '[' -z 3379402 ']' 00:16:05.188 00:55:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.188 00:55:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.188 00:55:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:05.188 00:55:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.188 00:55:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.128 00:55:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.128 00:55:50 -- common/autotest_common.sh@852 -- # return 0 00:16:06.128 00:55:50 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:07.066 00:55:51 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:07.066 00:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.066 00:55:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.066 00:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.066 00:55:51 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:07.066 00:55:51 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:07.067 00:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.067 00:55:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.067 malloc0 00:16:07.067 00:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.067 00:55:51 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:07.067 00:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.067 00:55:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.325 00:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.325 00:55:51 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:07.325 00:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.325 00:55:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.325 00:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.325 00:55:51 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:07.325 00:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.325 00:55:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.325 00:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.325 00:55:51 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:07.325 00:55:51 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:39.388 Fuzzing completed. 
Shutting down the fuzz application 00:16:39.388 00:16:39.388 Dumping successful admin opcodes: 00:16:39.388 8, 9, 10, 24, 00:16:39.388 Dumping successful io opcodes: 00:16:39.388 0, 00:16:39.388 NS: 0x200003a1ef00 I/O qp, Total commands completed: 568118, total successful commands: 2183, random_seed: 2440377280 00:16:39.388 NS: 0x200003a1ef00 admin qp, Total commands completed: 141194, total successful commands: 1146, random_seed: 3443258112 00:16:39.388 00:56:21 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:39.388 00:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.388 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.388 00:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.388 00:56:21 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3379402 00:16:39.388 00:56:21 -- common/autotest_common.sh@926 -- # '[' -z 3379402 ']' 00:16:39.388 00:56:21 -- common/autotest_common.sh@930 -- # kill -0 3379402 00:16:39.388 00:56:21 -- common/autotest_common.sh@931 -- # uname 00:16:39.388 00:56:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:39.388 00:56:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3379402 00:16:39.388 00:56:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:39.388 00:56:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:39.388 00:56:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3379402' 00:16:39.388 killing process with pid 3379402 00:16:39.388 00:56:21 -- common/autotest_common.sh@945 -- # kill 3379402 00:16:39.388 00:56:21 -- common/autotest_common.sh@950 -- # wait 3379402 00:16:39.388 00:56:22 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:39.388 00:56:22 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:39.388 00:16:39.388 real 0m33.066s 00:16:39.388 user 0m33.861s 00:16:39.388 sys 0m26.359s 00:16:39.388 00:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.388 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:16:39.388 ************************************ 00:16:39.388 END TEST nvmf_vfio_user_fuzz 00:16:39.388 ************************************ 00:16:39.388 00:56:22 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.388 00:56:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.388 00:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.388 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:16:39.388 ************************************ 00:16:39.388 START TEST nvmf_host_management 00:16:39.388 ************************************ 00:16:39.388 00:56:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.388 * Looking for test storage... 
00:16:39.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.389 00:56:22 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.389 00:56:22 -- nvmf/common.sh@7 -- # uname -s 00:16:39.389 00:56:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.389 00:56:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.389 00:56:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.389 00:56:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.389 00:56:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.389 00:56:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.389 00:56:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.389 00:56:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.389 00:56:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.389 00:56:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.389 00:56:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.389 00:56:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.389 00:56:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.389 00:56:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.389 00:56:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.389 00:56:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.389 00:56:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.389 00:56:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.389 00:56:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.389 00:56:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.389 00:56:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.389 00:56:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.389 00:56:22 -- paths/export.sh@5 -- # export PATH 00:16:39.389 00:56:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.389 00:56:22 -- nvmf/common.sh@46 -- # : 0 00:16:39.389 00:56:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.389 00:56:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.389 00:56:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.389 00:56:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.389 00:56:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.389 00:56:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.389 00:56:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.389 00:56:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.389 00:56:22 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.389 00:56:22 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.389 00:56:22 -- target/host_management.sh@104 -- # nvmftestinit 00:16:39.389 00:56:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.389 00:56:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.389 00:56:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.389 00:56:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.389 00:56:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.389 00:56:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.389 00:56:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.389 00:56:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.389 00:56:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.389 00:56:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.389 00:56:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.389 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:16:39.955 00:56:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.955 00:56:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:39.955 00:56:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:39.955 00:56:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:39.955 00:56:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:39.955 00:56:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:39.955 00:56:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:39.955 00:56:24 -- nvmf/common.sh@294 -- # net_devs=() 00:16:39.955 00:56:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:39.955 
00:56:24 -- nvmf/common.sh@295 -- # e810=() 00:16:39.955 00:56:24 -- nvmf/common.sh@295 -- # local -ga e810 00:16:39.955 00:56:24 -- nvmf/common.sh@296 -- # x722=() 00:16:39.955 00:56:24 -- nvmf/common.sh@296 -- # local -ga x722 00:16:39.955 00:56:24 -- nvmf/common.sh@297 -- # mlx=() 00:16:39.955 00:56:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:39.955 00:56:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.955 00:56:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:39.955 00:56:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:39.955 00:56:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:39.955 00:56:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:39.955 00:56:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:39.955 00:56:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:39.955 00:56:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.955 00:56:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:39.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:39.955 00:56:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.956 00:56:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:39.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:39.956 00:56:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:39.956 00:56:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.956 00:56:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.956 00:56:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.956 00:56:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.956 00:56:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:39.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:39.956 00:56:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.956 00:56:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.956 00:56:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.956 00:56:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.956 00:56:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.956 00:56:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:39.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:39.956 00:56:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.956 00:56:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:39.956 00:56:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:39.956 00:56:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:39.956 00:56:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:39.956 00:56:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.956 00:56:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.956 00:56:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.956 00:56:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:39.956 00:56:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.956 00:56:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.956 00:56:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:39.956 00:56:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.956 00:56:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.956 00:56:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:39.956 00:56:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:40.216 00:56:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.216 00:56:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.216 00:56:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.216 00:56:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.216 00:56:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:40.216 00:56:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.216 00:56:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.216 00:56:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.216 00:56:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:40.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:16:40.216 00:16:40.216 --- 10.0.0.2 ping statistics --- 00:16:40.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.216 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:40.216 00:56:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:16:40.216 00:16:40.216 --- 10.0.0.1 ping statistics --- 00:16:40.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.216 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:16:40.216 00:56:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.216 00:56:24 -- nvmf/common.sh@410 -- # return 0 00:16:40.216 00:56:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:40.216 00:56:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.216 00:56:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:40.216 00:56:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:40.216 00:56:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.216 00:56:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:40.216 00:56:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:40.216 00:56:24 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:40.216 00:56:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:40.216 00:56:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:40.216 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:40.216 ************************************ 00:16:40.216 START TEST nvmf_host_management 00:16:40.216 ************************************ 00:16:40.216 00:56:24 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:40.216 00:56:24 -- target/host_management.sh@69 -- # starttarget 00:16:40.216 00:56:24 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:40.216 00:56:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:40.216 00:56:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:40.216 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:40.216 00:56:24 -- nvmf/common.sh@469 -- # nvmfpid=3384984 00:16:40.216 00:56:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:40.216 00:56:24 -- nvmf/common.sh@470 -- # waitforlisten 3384984 00:16:40.216 00:56:24 -- common/autotest_common.sh@819 -- # '[' -z 3384984 ']' 00:16:40.216 00:56:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.216 00:56:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:40.216 00:56:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.216 00:56:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:40.216 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:40.216 [2024-07-23 00:56:24.356336] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:16:40.216 [2024-07-23 00:56:24.356414] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.216 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.474 [2024-07-23 00:56:24.420830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.474 [2024-07-23 00:56:24.509921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.474 [2024-07-23 00:56:24.510078] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.474 [2024-07-23 00:56:24.510096] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.474 [2024-07-23 00:56:24.510109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.474 [2024-07-23 00:56:24.510200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.474 [2024-07-23 00:56:24.510325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.474 [2024-07-23 00:56:24.510395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.474 [2024-07-23 00:56:24.510393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.408 00:56:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.408 00:56:25 -- common/autotest_common.sh@852 -- # return 0 00:16:41.408 00:56:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.408 00:56:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:41.408 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 00:56:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.408 00:56:25 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.408 00:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.408 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 [2024-07-23 00:56:25.316184] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.408 00:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.408 00:56:25 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:41.408 00:56:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:41.408 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 00:56:25 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.408 00:56:25 -- target/host_management.sh@23 -- # cat 00:16:41.408 00:56:25 -- target/host_management.sh@30 -- # rpc_cmd 00:16:41.408 00:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.408 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 Malloc0 00:16:41.408 [2024-07-23 00:56:25.375197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.408 00:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.408 00:56:25 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:41.408 00:56:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:41.408 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 00:56:25 -- target/host_management.sh@73 -- # perfpid=3385163 00:16:41.408 00:56:25 -- target/host_management.sh@74 -- # 
waitforlisten 3385163 /var/tmp/bdevperf.sock 00:16:41.408 00:56:25 -- common/autotest_common.sh@819 -- # '[' -z 3385163 ']' 00:16:41.408 00:56:25 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:41.408 00:56:25 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:41.408 00:56:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.408 00:56:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.408 00:56:25 -- nvmf/common.sh@520 -- # config=() 00:16:41.408 00:56:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.408 00:56:25 -- nvmf/common.sh@520 -- # local subsystem config 00:16:41.408 00:56:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.408 00:56:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:41.408 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:41.409 00:56:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:41.409 { 00:16:41.409 "params": { 00:16:41.409 "name": "Nvme$subsystem", 00:16:41.409 "trtype": "$TEST_TRANSPORT", 00:16:41.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.409 "adrfam": "ipv4", 00:16:41.409 "trsvcid": "$NVMF_PORT", 00:16:41.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.409 "hdgst": ${hdgst:-false}, 00:16:41.409 "ddgst": ${ddgst:-false} 00:16:41.409 }, 00:16:41.409 "method": "bdev_nvme_attach_controller" 00:16:41.409 } 00:16:41.409 EOF 00:16:41.409 )") 00:16:41.409 00:56:25 -- nvmf/common.sh@542 -- # cat 00:16:41.409 00:56:25 -- nvmf/common.sh@544 -- # jq . 00:16:41.409 00:56:25 -- nvmf/common.sh@545 -- # IFS=, 00:16:41.409 00:56:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:41.409 "params": { 00:16:41.409 "name": "Nvme0", 00:16:41.409 "trtype": "tcp", 00:16:41.409 "traddr": "10.0.0.2", 00:16:41.409 "adrfam": "ipv4", 00:16:41.409 "trsvcid": "4420", 00:16:41.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:41.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:41.409 "hdgst": false, 00:16:41.409 "ddgst": false 00:16:41.409 }, 00:16:41.409 "method": "bdev_nvme_attach_controller" 00:16:41.409 }' 00:16:41.409 [2024-07-23 00:56:25.442566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:41.409 [2024-07-23 00:56:25.442693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385163 ] 00:16:41.409 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.409 [2024-07-23 00:56:25.503429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.409 [2024-07-23 00:56:25.588737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.053 Running I/O for 10 seconds... 
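The bdevperf initiator above receives its bdev configuration as JSON on /dev/fd/63, generated on the fly by gen_nvmf_target_json from the parameters shown in the trace. A rough standalone equivalent that writes the same attach-controller stanza to a file instead of a process substitution (the file path and the exact wrapper emitted by the helper are assumptions; only the params block is taken verbatim from the trace) would be:

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 64 outstanding I/Os of 64 KiB each, verify workload, 10 seconds: same knobs as the traced run
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10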
00:16:42.314 00:56:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.314 00:56:26 -- common/autotest_common.sh@852 -- # return 0 00:16:42.314 00:56:26 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:42.314 00:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.314 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:42.314 00:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.314 00:56:26 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.314 00:56:26 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:42.314 00:56:26 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:42.314 00:56:26 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:42.314 00:56:26 -- target/host_management.sh@52 -- # local ret=1 00:16:42.314 00:56:26 -- target/host_management.sh@53 -- # local i 00:16:42.314 00:56:26 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:42.314 00:56:26 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:42.314 00:56:26 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:42.314 00:56:26 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:42.314 00:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.314 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:42.314 00:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.314 00:56:26 -- target/host_management.sh@55 -- # read_io_count=1220 00:16:42.314 00:56:26 -- target/host_management.sh@58 -- # '[' 1220 -ge 100 ']' 00:16:42.315 00:56:26 -- target/host_management.sh@59 -- # ret=0 00:16:42.315 00:56:26 -- target/host_management.sh@60 -- # break 00:16:42.315 00:56:26 -- target/host_management.sh@64 -- # return 0 00:16:42.315 00:56:26 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:42.315 00:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.315 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 [2024-07-23 00:56:26.438822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.438995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to 
be set 00:16:42.315 [2024-07-23 00:56:26.439008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f370 is same with the state(5) to be set 00:16:42.315 [2024-07-23 00:56:26.439945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.439999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.315 [2024-07-23 00:56:26.440812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.315 [2024-07-23 00:56:26.440828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.440843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.440860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.440879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.440897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.440920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.440937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.440952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.440987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:42.316 [2024-07-23 00:56:26.441238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 
[2024-07-23 00:56:26.441556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 
00:56:26.441933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.316 [2024-07-23 00:56:26.441979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.316 [2024-07-23 00:56:26.441996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.317 [2024-07-23 00:56:26.442009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.317 [2024-07-23 00:56:26.442026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.317 [2024-07-23 00:56:26.442041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.317 [2024-07-23 00:56:26.442057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.317 [2024-07-23 00:56:26.442072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.317 [2024-07-23 00:56:26.442088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.317 [2024-07-23 00:56:26.442102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.317 [2024-07-23 00:56:26.442120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.317 [2024-07-23 00:56:26.442135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.317 [2024-07-23 00:56:26.442234] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcd4080 was disconnected and freed. reset controller. 
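The long run of ABORTED - SQ DELETION completions above is the intent of the test rather than a failure: with 64 I/Os outstanding from bdevperf, the script revokes the host's access to the subsystem, the target drops the TCP queue pair, every queued command is completed with an abort status, and bdev_nvme then frees the qpair and schedules a controller reset. The trigger was the nvmf_subsystem_remove_host call earlier in the trace, and the nvmf_subsystem_add_host that follows restores access so the reset can reconnect. Stripped of the rpc_cmd wrapper (rpc.py path shortened, default RPC socket assumed), the pair is simply:

    # revoke the host while I/O is in flight, then restore it so the reset succeeds
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0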
00:16:42.317 00:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.317 00:56:26 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:42.317 [2024-07-23 00:56:26.443413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:42.317 00:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.317 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:42.317 task offset: 43904 on job bdev=Nvme0n1 fails 00:16:42.317 00:16:42.317 Latency(us) 00:16:42.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.317 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:42.317 Job: Nvme0n1 ended in about 0.52 seconds with error 00:16:42.317 Verification LBA range: start 0x0 length 0x400 00:16:42.317 Nvme0n1 : 0.52 2603.42 162.71 123.42 0.00 23148.85 2706.39 27573.67 00:16:42.317 =================================================================================================================== 00:16:42.317 Total : 2603.42 162.71 123.42 0.00 23148.85 2706.39 27573.67 00:16:42.317 [2024-07-23 00:56:26.445307] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:42.317 [2024-07-23 00:56:26.445338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd9c20 (9): Bad file descriptor 00:16:42.317 00:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.317 00:56:26 -- target/host_management.sh@87 -- # sleep 1 00:16:42.575 [2024-07-23 00:56:26.537820] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:43.514 00:56:27 -- target/host_management.sh@91 -- # kill -9 3385163 00:16:43.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3385163) - No such process 00:16:43.514 00:56:27 -- target/host_management.sh@91 -- # true 00:16:43.514 00:56:27 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:43.514 00:56:27 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:43.514 00:56:27 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:43.514 00:56:27 -- nvmf/common.sh@520 -- # config=() 00:16:43.514 00:56:27 -- nvmf/common.sh@520 -- # local subsystem config 00:16:43.514 00:56:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:43.514 00:56:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:43.514 { 00:16:43.514 "params": { 00:16:43.514 "name": "Nvme$subsystem", 00:16:43.514 "trtype": "$TEST_TRANSPORT", 00:16:43.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.514 "adrfam": "ipv4", 00:16:43.514 "trsvcid": "$NVMF_PORT", 00:16:43.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.514 "hdgst": ${hdgst:-false}, 00:16:43.514 "ddgst": ${ddgst:-false} 00:16:43.514 }, 00:16:43.514 "method": "bdev_nvme_attach_controller" 00:16:43.514 } 00:16:43.514 EOF 00:16:43.514 )") 00:16:43.514 00:56:27 -- nvmf/common.sh@542 -- # cat 00:16:43.514 00:56:27 -- nvmf/common.sh@544 -- # jq . 
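Once the host is re-added, the test re-runs bdevperf for a single second against the same subsystem (traced below) to confirm the data path recovered after the reset; the only differences from the first invocation are the config arriving on /dev/fd/62 and -t 1. Reusing the config file from the earlier sketch, a by-hand equivalent would be roughly:

    # short re-verification pass after the controller reset
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 1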
00:16:43.514 00:56:27 -- nvmf/common.sh@545 -- # IFS=, 00:16:43.514 00:56:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:43.514 "params": { 00:16:43.514 "name": "Nvme0", 00:16:43.514 "trtype": "tcp", 00:16:43.514 "traddr": "10.0.0.2", 00:16:43.514 "adrfam": "ipv4", 00:16:43.514 "trsvcid": "4420", 00:16:43.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:43.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:43.514 "hdgst": false, 00:16:43.514 "ddgst": false 00:16:43.514 }, 00:16:43.514 "method": "bdev_nvme_attach_controller" 00:16:43.514 }' 00:16:43.514 [2024-07-23 00:56:27.497457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:43.514 [2024-07-23 00:56:27.497535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385449 ] 00:16:43.514 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.514 [2024-07-23 00:56:27.557698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.514 [2024-07-23 00:56:27.644255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.773 Running I/O for 1 seconds... 00:16:44.708 00:16:44.708 Latency(us) 00:16:44.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.708 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:44.708 Verification LBA range: start 0x0 length 0x400 00:16:44.708 Nvme0n1 : 1.01 3046.94 190.43 0.00 0.00 20708.38 2463.67 25437.68 00:16:44.708 =================================================================================================================== 00:16:44.708 Total : 3046.94 190.43 0.00 0.00 20708.38 2463.67 25437.68 00:16:44.967 00:56:29 -- target/host_management.sh@101 -- # stoptarget 00:16:44.967 00:56:29 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:44.967 00:56:29 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:44.967 00:56:29 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:44.967 00:56:29 -- target/host_management.sh@40 -- # nvmftestfini 00:16:44.967 00:56:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:44.967 00:56:29 -- nvmf/common.sh@116 -- # sync 00:16:44.967 00:56:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:44.967 00:56:29 -- nvmf/common.sh@119 -- # set +e 00:16:44.967 00:56:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:44.967 00:56:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:44.967 rmmod nvme_tcp 00:16:44.967 rmmod nvme_fabrics 00:16:44.967 rmmod nvme_keyring 00:16:44.967 00:56:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.967 00:56:29 -- nvmf/common.sh@123 -- # set -e 00:16:44.967 00:56:29 -- nvmf/common.sh@124 -- # return 0 00:16:44.967 00:56:29 -- nvmf/common.sh@477 -- # '[' -n 3384984 ']' 00:16:44.967 00:56:29 -- nvmf/common.sh@478 -- # killprocess 3384984 00:16:44.967 00:56:29 -- common/autotest_common.sh@926 -- # '[' -z 3384984 ']' 00:16:44.967 00:56:29 -- common/autotest_common.sh@930 -- # kill -0 3384984 00:16:44.967 00:56:29 -- common/autotest_common.sh@931 -- # uname 00:16:44.967 00:56:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.967 00:56:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3384984 00:16:44.967 00:56:29 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:44.967 00:56:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:44.967 00:56:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3384984' 00:16:44.967 killing process with pid 3384984 00:16:44.967 00:56:29 -- common/autotest_common.sh@945 -- # kill 3384984 00:16:44.967 00:56:29 -- common/autotest_common.sh@950 -- # wait 3384984 00:16:45.225 [2024-07-23 00:56:29.381783] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:45.225 00:56:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:45.225 00:56:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:45.225 00:56:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:45.225 00:56:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.225 00:56:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:45.225 00:56:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.225 00:56:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.225 00:56:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.766 00:56:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:47.766 00:16:47.766 real 0m7.139s 00:16:47.766 user 0m22.122s 00:16:47.766 sys 0m1.275s 00:16:47.766 00:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.766 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:47.766 ************************************ 00:16:47.766 END TEST nvmf_host_management 00:16:47.766 ************************************ 00:16:47.766 00:56:31 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:47.766 00:16:47.766 real 0m9.246s 00:16:47.766 user 0m22.936s 00:16:47.766 sys 0m2.589s 00:16:47.766 00:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.766 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:47.766 ************************************ 00:16:47.766 END TEST nvmf_host_management 00:16:47.766 ************************************ 00:16:47.766 00:56:31 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.766 00:56:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:47.766 00:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:47.766 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:47.766 ************************************ 00:16:47.766 START TEST nvmf_lvol 00:16:47.766 ************************************ 00:16:47.766 00:56:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.766 * Looking for test storage... 
00:16:47.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.766 00:56:31 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.766 00:56:31 -- nvmf/common.sh@7 -- # uname -s 00:16:47.766 00:56:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.766 00:56:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.766 00:56:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.766 00:56:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.766 00:56:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.766 00:56:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.766 00:56:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.766 00:56:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.766 00:56:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.766 00:56:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.766 00:56:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.766 00:56:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.766 00:56:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.766 00:56:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.766 00:56:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.766 00:56:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.766 00:56:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.766 00:56:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.766 00:56:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.766 00:56:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.766 00:56:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.767 00:56:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.767 00:56:31 -- paths/export.sh@5 -- # export PATH 00:16:47.767 00:56:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.767 00:56:31 -- nvmf/common.sh@46 -- # : 0 00:16:47.767 00:56:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:47.767 00:56:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:47.767 00:56:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:47.767 00:56:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.767 00:56:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.767 00:56:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:47.767 00:56:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:47.767 00:56:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:47.767 00:56:31 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.767 00:56:31 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.767 00:56:31 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:47.767 00:56:31 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:47.767 00:56:31 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.767 00:56:31 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:47.767 00:56:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:47.767 00:56:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.767 00:56:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:47.767 00:56:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:47.767 00:56:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:47.767 00:56:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.767 00:56:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.767 00:56:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.767 00:56:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:47.767 00:56:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:47.767 00:56:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:47.767 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.675 00:56:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:49.675 00:56:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:49.675 00:56:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:49.675 00:56:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:49.675 00:56:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:49.675 00:56:33 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:49.675 00:56:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:49.675 00:56:33 -- nvmf/common.sh@294 -- # net_devs=() 00:16:49.675 00:56:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:49.675 00:56:33 -- nvmf/common.sh@295 -- # e810=() 00:16:49.675 00:56:33 -- nvmf/common.sh@295 -- # local -ga e810 00:16:49.675 00:56:33 -- nvmf/common.sh@296 -- # x722=() 00:16:49.675 00:56:33 -- nvmf/common.sh@296 -- # local -ga x722 00:16:49.675 00:56:33 -- nvmf/common.sh@297 -- # mlx=() 00:16:49.675 00:56:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:49.675 00:56:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.675 00:56:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:49.675 00:56:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:49.675 00:56:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:49.675 00:56:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:49.675 00:56:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:49.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:49.675 00:56:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:49.675 00:56:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:49.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:49.675 00:56:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:49.675 00:56:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:49.675 00:56:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:49.675 00:56:33 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.675 00:56:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:49.675 00:56:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.675 00:56:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:49.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:49.675 00:56:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.676 00:56:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:49.676 00:56:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.676 00:56:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:49.676 00:56:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.676 00:56:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:49.676 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:49.676 00:56:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.676 00:56:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:49.676 00:56:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:49.676 00:56:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:49.676 00:56:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:49.676 00:56:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:49.676 00:56:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.676 00:56:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.676 00:56:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.676 00:56:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:49.676 00:56:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.676 00:56:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.676 00:56:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:49.676 00:56:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.676 00:56:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.676 00:56:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:49.676 00:56:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:49.676 00:56:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.676 00:56:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.676 00:56:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.676 00:56:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.676 00:56:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:49.676 00:56:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.676 00:56:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.676 00:56:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.676 00:56:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:49.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:16:49.676 00:16:49.676 --- 10.0.0.2 ping statistics --- 00:16:49.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.676 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:16:49.676 00:56:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:16:49.676 00:16:49.676 --- 10.0.0.1 ping statistics --- 00:16:49.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.676 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:49.676 00:56:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.676 00:56:33 -- nvmf/common.sh@410 -- # return 0 00:16:49.676 00:56:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:49.676 00:56:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.676 00:56:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:49.676 00:56:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:49.676 00:56:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.676 00:56:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:49.676 00:56:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:49.676 00:56:33 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:49.676 00:56:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:49.676 00:56:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:49.676 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:16:49.676 00:56:33 -- nvmf/common.sh@469 -- # nvmfpid=3387573 00:16:49.676 00:56:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:49.676 00:56:33 -- nvmf/common.sh@470 -- # waitforlisten 3387573 00:16:49.676 00:56:33 -- common/autotest_common.sh@819 -- # '[' -z 3387573 ']' 00:16:49.676 00:56:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.676 00:56:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:49.676 00:56:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.676 00:56:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:49.676 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:16:49.676 [2024-07-23 00:56:33.734791] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:49.676 [2024-07-23 00:56:33.734874] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.676 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.676 [2024-07-23 00:56:33.806450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.935 [2024-07-23 00:56:33.894008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.935 [2024-07-23 00:56:33.894170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.935 [2024-07-23 00:56:33.894187] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.935 [2024-07-23 00:56:33.894199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
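The sequence above is the per-test network bring-up: the two ice ports found earlier (cvl_0_0 and cvl_0_1) are split across a private network namespace so a single host can run target and initiator against each other over the physical link. A condensed, hand-runnable sketch of what the trace executes (interface names, IPs and the namespace name are the ones from this run and will differ on other hosts; SPDK paths are shortened to repo-relative form):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                     # default namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> default namespace
    modprobe nvme-tcp
    # the target is launched inside the namespace so it listens on the 10.0.0.2 side
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

If both pings answer, nvmf_tcp_init returns 0 and everything after this point talks to 10.0.0.2:4420.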
00:16:49.935 [2024-07-23 00:56:33.894292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.935 [2024-07-23 00:56:33.894351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.935 [2024-07-23 00:56:33.894354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.554 00:56:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:50.554 00:56:34 -- common/autotest_common.sh@852 -- # return 0 00:16:50.554 00:56:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:50.554 00:56:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:50.554 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:16:50.554 00:56:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.554 00:56:34 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.813 [2024-07-23 00:56:34.921895] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.813 00:56:34 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:51.071 00:56:35 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:51.071 00:56:35 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:51.329 00:56:35 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:51.329 00:56:35 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:51.587 00:56:35 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:51.845 00:56:36 -- target/nvmf_lvol.sh@29 -- # lvs=304ac2d8-0e68-4b8c-a9cc-9cd945b03d53 00:16:51.845 00:56:36 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 304ac2d8-0e68-4b8c-a9cc-9cd945b03d53 lvol 20 00:16:52.102 00:56:36 -- target/nvmf_lvol.sh@32 -- # lvol=47237dbf-46f5-4052-b893-7ae0e68ec409 00:16:52.103 00:56:36 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:52.361 00:56:36 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47237dbf-46f5-4052-b893-7ae0e68ec409 00:16:52.619 00:56:36 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:52.877 [2024-07-23 00:56:36.931352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.877 00:56:36 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.135 00:56:37 -- target/nvmf_lvol.sh@42 -- # perf_pid=3388122 00:16:53.135 00:56:37 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:53.135 00:56:37 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:53.135 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.071 
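nvmf_lvol.sh has now built the device stack it will exercise. Reduced to the RPC calls visible in the trace (rpc.py stands for scripts/rpc.py in the checked-out tree; $lvs and $lvol are simply the UUIDs the create calls print, captured the same way the script does):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                   # Malloc0
    rpc.py bdev_malloc_create 64 512                                   # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on top of the raid
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random writes over NVMe/TCP, left running in the background
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!

The point of the arrangement is that the lvol operations in the next step run while this perf job is still writing to the same volume.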
00:56:38 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 47237dbf-46f5-4052-b893-7ae0e68ec409 MY_SNAPSHOT 00:16:54.330 00:56:38 -- target/nvmf_lvol.sh@47 -- # snapshot=ad831164-a91c-4e9f-96e8-947a95ec3e48 00:16:54.330 00:56:38 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 47237dbf-46f5-4052-b893-7ae0e68ec409 30 00:16:54.588 00:56:38 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ad831164-a91c-4e9f-96e8-947a95ec3e48 MY_CLONE 00:16:54.846 00:56:38 -- target/nvmf_lvol.sh@49 -- # clone=ee1cd616-8df5-45d5-b493-0c6a693b18e8 00:16:54.846 00:56:38 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ee1cd616-8df5-45d5-b493-0c6a693b18e8 00:16:55.414 00:56:39 -- target/nvmf_lvol.sh@53 -- # wait 3388122 00:17:03.558 Initializing NVMe Controllers 00:17:03.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:03.558 Controller IO queue size 128, less than required. 00:17:03.558 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:03.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:03.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:03.558 Initialization complete. Launching workers. 00:17:03.558 ======================================================== 00:17:03.558 Latency(us) 00:17:03.558 Device Information : IOPS MiB/s Average min max 00:17:03.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10255.79 40.06 12488.89 2068.08 78068.17 00:17:03.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10883.29 42.51 11765.44 2088.52 61949.51 00:17:03.558 ======================================================== 00:17:03.558 Total : 21139.09 82.57 12116.43 2068.08 78068.17 00:17:03.558 00:17:03.558 00:56:47 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:03.815 00:56:47 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 47237dbf-46f5-4052-b893-7ae0e68ec409 00:17:04.073 00:56:48 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 304ac2d8-0e68-4b8c-a9cc-9cd945b03d53 00:17:04.334 00:56:48 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:04.334 00:56:48 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:04.334 00:56:48 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:04.334 00:56:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:04.334 00:56:48 -- nvmf/common.sh@116 -- # sync 00:17:04.334 00:56:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:04.334 00:56:48 -- nvmf/common.sh@119 -- # set +e 00:17:04.334 00:56:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:04.334 00:56:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:04.334 rmmod nvme_tcp 00:17:04.334 rmmod nvme_fabrics 00:17:04.334 rmmod nvme_keyring 00:17:04.334 00:56:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:04.334 00:56:48 -- nvmf/common.sh@123 -- # set -e 00:17:04.334 00:56:48 -- nvmf/common.sh@124 -- # return 0 00:17:04.334 00:56:48 -- nvmf/common.sh@477 -- # '[' -n 3387573 ']' 
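While the perf job was writing, the test walked the lvol through its metadata operations and then waited for the I/O to finish; the latency table above is the result of that overlap. The sequence, condensed from the trace ($lvol, $snap and $clone stand for the UUIDs returned by the earlier calls):

    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # point-in-time snapshot of the live lvol
    rpc.py bdev_lvol_resize "$lvol" 30                      # grow the lvol from 20 to 30 MiB under load
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
    rpc.py bdev_lvol_inflate "$clone"                       # fully allocate the clone, detaching it from the snapshot
    wait "$perf_pid"                                        # the randwrite job must complete without I/O errors

The check here is simply that every call and the perf job return success; the IOPS and latency numbers above are informational rather than a threshold.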
00:17:04.334 00:56:48 -- nvmf/common.sh@478 -- # killprocess 3387573 00:17:04.334 00:56:48 -- common/autotest_common.sh@926 -- # '[' -z 3387573 ']' 00:17:04.334 00:56:48 -- common/autotest_common.sh@930 -- # kill -0 3387573 00:17:04.334 00:56:48 -- common/autotest_common.sh@931 -- # uname 00:17:04.334 00:56:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.334 00:56:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3387573 00:17:04.334 00:56:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:04.334 00:56:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:04.334 00:56:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3387573' 00:17:04.334 killing process with pid 3387573 00:17:04.334 00:56:48 -- common/autotest_common.sh@945 -- # kill 3387573 00:17:04.334 00:56:48 -- common/autotest_common.sh@950 -- # wait 3387573 00:17:04.592 00:56:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:04.592 00:56:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:04.592 00:56:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:04.592 00:56:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.592 00:56:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:04.592 00:56:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.592 00:56:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.592 00:56:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.122 00:56:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:07.122 00:17:07.122 real 0m19.218s 00:17:07.122 user 1m5.578s 00:17:07.122 sys 0m5.550s 00:17:07.122 00:56:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.122 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:17:07.122 ************************************ 00:17:07.122 END TEST nvmf_lvol 00:17:07.122 ************************************ 00:17:07.122 00:56:50 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:07.122 00:56:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:07.122 00:56:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:07.122 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:17:07.122 ************************************ 00:17:07.122 START TEST nvmf_lvs_grow 00:17:07.122 ************************************ 00:17:07.122 00:56:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:07.122 * Looking for test storage... 
00:17:07.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.122 00:56:50 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.122 00:56:50 -- nvmf/common.sh@7 -- # uname -s 00:17:07.122 00:56:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.122 00:56:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.122 00:56:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.122 00:56:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.122 00:56:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.122 00:56:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.122 00:56:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.122 00:56:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.122 00:56:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.122 00:56:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.122 00:56:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.122 00:56:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.122 00:56:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.122 00:56:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.122 00:56:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.122 00:56:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.122 00:56:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.122 00:56:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.122 00:56:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.122 00:56:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.122 00:56:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.122 00:56:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.122 00:56:50 -- paths/export.sh@5 -- # export PATH 00:17:07.122 00:56:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.122 00:56:50 -- nvmf/common.sh@46 -- # : 0 00:17:07.122 00:56:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.122 00:56:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.122 00:56:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.122 00:56:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.122 00:56:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.122 00:56:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.122 00:56:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.122 00:56:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.122 00:56:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.122 00:56:50 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.122 00:56:50 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:07.122 00:56:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:07.122 00:56:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.122 00:56:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.122 00:56:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.122 00:56:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.122 00:56:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.122 00:56:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.122 00:56:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.122 00:56:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:07.122 00:56:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:07.122 00:56:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:07.122 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:17:08.498 00:56:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.498 00:56:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:08.498 00:56:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:08.498 00:56:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:08.498 00:56:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:08.498 00:56:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:08.498 00:56:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:08.498 00:56:52 -- nvmf/common.sh@294 -- # net_devs=() 00:17:08.498 00:56:52 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:17:08.498 00:56:52 -- nvmf/common.sh@295 -- # e810=() 00:17:08.499 00:56:52 -- nvmf/common.sh@295 -- # local -ga e810 00:17:08.499 00:56:52 -- nvmf/common.sh@296 -- # x722=() 00:17:08.499 00:56:52 -- nvmf/common.sh@296 -- # local -ga x722 00:17:08.499 00:56:52 -- nvmf/common.sh@297 -- # mlx=() 00:17:08.499 00:56:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:08.499 00:56:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.499 00:56:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:08.499 00:56:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:08.499 00:56:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:08.499 00:56:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.499 00:56:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.499 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.499 00:56:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.499 00:56:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.499 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.499 00:56:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:08.499 00:56:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.499 00:56:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.499 00:56:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.499 00:56:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.499 00:56:52 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.499 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.499 00:56:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.499 00:56:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.499 00:56:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.499 00:56:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.499 00:56:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.499 00:56:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.499 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.499 00:56:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.499 00:56:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:08.499 00:56:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:08.499 00:56:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:08.499 00:56:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:08.499 00:56:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.499 00:56:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.499 00:56:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.499 00:56:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:08.499 00:56:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.499 00:56:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.499 00:56:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:08.499 00:56:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.499 00:56:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.499 00:56:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:08.499 00:56:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:08.757 00:56:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.757 00:56:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.757 00:56:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.757 00:56:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.758 00:56:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:08.758 00:56:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.758 00:56:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.758 00:56:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.758 00:56:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:08.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:17:08.758 00:17:08.758 --- 10.0.0.2 ping statistics --- 00:17:08.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.758 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:08.758 00:56:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:17:08.758 00:17:08.758 --- 10.0.0.1 ping statistics --- 00:17:08.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.758 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:08.758 00:56:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.758 00:56:52 -- nvmf/common.sh@410 -- # return 0 00:17:08.758 00:56:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.758 00:56:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.758 00:56:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.758 00:56:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.758 00:56:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.758 00:56:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.758 00:56:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.758 00:56:52 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:08.758 00:56:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.758 00:56:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.758 00:56:52 -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 00:56:52 -- nvmf/common.sh@469 -- # nvmfpid=3391305 00:17:08.758 00:56:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.758 00:56:52 -- nvmf/common.sh@470 -- # waitforlisten 3391305 00:17:08.758 00:56:52 -- common/autotest_common.sh@819 -- # '[' -z 3391305 ']' 00:17:08.758 00:56:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.758 00:56:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.758 00:56:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.758 00:56:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.758 00:56:52 -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 [2024-07-23 00:56:52.894178] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:08.758 [2024-07-23 00:56:52.894261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.758 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.017 [2024-07-23 00:56:52.964647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.017 [2024-07-23 00:56:53.054230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.017 [2024-07-23 00:56:53.054398] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.017 [2024-07-23 00:56:53.054417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.017 [2024-07-23 00:56:53.054431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.017 [2024-07-23 00:56:53.054468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.950 00:56:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.950 00:56:53 -- common/autotest_common.sh@852 -- # return 0 00:17:09.950 00:56:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.950 00:56:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:09.950 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:17:09.950 00:56:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.950 00:56:53 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.950 [2024-07-23 00:56:54.079275] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:09.950 00:56:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:09.950 00:56:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:09.950 00:56:54 -- common/autotest_common.sh@10 -- # set +x 00:17:09.950 ************************************ 00:17:09.950 START TEST lvs_grow_clean 00:17:09.950 ************************************ 00:17:09.950 00:56:54 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:09.950 00:56:54 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:09.951 00:56:54 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.951 00:56:54 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.951 00:56:54 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.210 00:56:54 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:10.210 00:56:54 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:10.468 00:56:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:10.468 00:56:54 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:10.468 00:56:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:10.726 00:56:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:10.726 00:56:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:10.726 00:56:54 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b57ece8-22b5-48d4-8e02-ec001065e37e lvol 150 00:17:10.984 00:56:55 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161 00:17:10.984 00:56:55 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.984 00:56:55 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:11.244 [2024-07-23 00:56:55.339873] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:11.244 [2024-07-23 00:56:55.339971] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:11.244 true 00:17:11.244 00:56:55 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:11.244 00:56:55 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:11.503 00:56:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:11.503 00:56:55 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.761 00:56:55 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161 00:17:12.021 00:56:56 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:12.279 [2024-07-23 00:56:56.367138] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.279 00:56:56 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.536 00:56:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3391884 00:17:12.536 00:56:56 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:12.536 00:56:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.536 00:56:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3391884 /var/tmp/bdevperf.sock 00:17:12.536 00:56:56 -- common/autotest_common.sh@819 -- # '[' -z 3391884 ']' 00:17:12.536 00:56:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.536 00:56:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.536 00:56:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.536 00:56:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.536 00:56:56 -- common/autotest_common.sh@10 -- # set +x 00:17:12.536 [2024-07-23 00:56:56.652386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
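lvs_grow_clean builds its lvstore on a file-backed AIO bdev precisely so the backing device can be enlarged mid-test. What the trace has done so far, condensed (the aio file lives under test/nvmf/target/ in the workspace; $lvs and $lvol are the UUIDs printed by the create calls):

    truncate -s 200M ./test/nvmf/target/aio_bdev                 # 200 MiB backing file
    rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 at 200 MiB with 4 MiB clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB lvol, exported as cnode0 below
    truncate -s 400M ./test/nvmf/target/aio_bdev                 # enlarge the backing file...
    rpc.py bdev_aio_rescan aio_bdev                              # ...and have the aio bdev pick up the new size

At this point the backing bdev is twice as large, but the lvstore still reports 49 data clusters; growing it is deliberately deferred until I/O is in flight.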
00:17:12.536 [2024-07-23 00:56:56.652476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391884 ] 00:17:12.536 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.536 [2024-07-23 00:56:56.710523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.795 [2024-07-23 00:56:56.794595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.731 00:56:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.731 00:56:57 -- common/autotest_common.sh@852 -- # return 0 00:17:13.731 00:56:57 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:13.731 Nvme0n1 00:17:13.989 00:56:57 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:14.248 [ 00:17:14.248 { 00:17:14.248 "name": "Nvme0n1", 00:17:14.248 "aliases": [ 00:17:14.248 "b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161" 00:17:14.248 ], 00:17:14.248 "product_name": "NVMe disk", 00:17:14.248 "block_size": 4096, 00:17:14.248 "num_blocks": 38912, 00:17:14.248 "uuid": "b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161", 00:17:14.248 "assigned_rate_limits": { 00:17:14.248 "rw_ios_per_sec": 0, 00:17:14.248 "rw_mbytes_per_sec": 0, 00:17:14.248 "r_mbytes_per_sec": 0, 00:17:14.248 "w_mbytes_per_sec": 0 00:17:14.248 }, 00:17:14.248 "claimed": false, 00:17:14.248 "zoned": false, 00:17:14.248 "supported_io_types": { 00:17:14.248 "read": true, 00:17:14.248 "write": true, 00:17:14.248 "unmap": true, 00:17:14.248 "write_zeroes": true, 00:17:14.248 "flush": true, 00:17:14.248 "reset": true, 00:17:14.248 "compare": true, 00:17:14.248 "compare_and_write": true, 00:17:14.248 "abort": true, 00:17:14.248 "nvme_admin": true, 00:17:14.248 "nvme_io": true 00:17:14.248 }, 00:17:14.248 "driver_specific": { 00:17:14.248 "nvme": [ 00:17:14.248 { 00:17:14.248 "trid": { 00:17:14.248 "trtype": "TCP", 00:17:14.248 "adrfam": "IPv4", 00:17:14.248 "traddr": "10.0.0.2", 00:17:14.248 "trsvcid": "4420", 00:17:14.248 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:14.248 }, 00:17:14.248 "ctrlr_data": { 00:17:14.248 "cntlid": 1, 00:17:14.248 "vendor_id": "0x8086", 00:17:14.248 "model_number": "SPDK bdev Controller", 00:17:14.248 "serial_number": "SPDK0", 00:17:14.248 "firmware_revision": "24.01.1", 00:17:14.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:14.248 "oacs": { 00:17:14.248 "security": 0, 00:17:14.248 "format": 0, 00:17:14.248 "firmware": 0, 00:17:14.248 "ns_manage": 0 00:17:14.248 }, 00:17:14.248 "multi_ctrlr": true, 00:17:14.248 "ana_reporting": false 00:17:14.248 }, 00:17:14.248 "vs": { 00:17:14.248 "nvme_version": "1.3" 00:17:14.248 }, 00:17:14.248 "ns_data": { 00:17:14.248 "id": 1, 00:17:14.248 "can_share": true 00:17:14.248 } 00:17:14.248 } 00:17:14.248 ], 00:17:14.248 "mp_policy": "active_passive" 00:17:14.248 } 00:17:14.248 } 00:17:14.248 ] 00:17:14.248 00:56:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3392026 00:17:14.248 00:56:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:14.248 00:56:58 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:14.248 Running I/O 
for 10 seconds... 00:17:15.185 Latency(us) 00:17:15.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.185 Nvme0n1 : 1.00 14408.00 56.28 0.00 0.00 0.00 0.00 0.00 00:17:15.185 =================================================================================================================== 00:17:15.185 Total : 14408.00 56.28 0.00 0.00 0.00 0.00 0.00 00:17:15.185 00:17:16.121 00:57:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:16.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.378 Nvme0n1 : 2.00 14564.00 56.89 0.00 0.00 0.00 0.00 0.00 00:17:16.378 =================================================================================================================== 00:17:16.378 Total : 14564.00 56.89 0.00 0.00 0.00 0.00 0.00 00:17:16.378 00:17:16.378 true 00:17:16.378 00:57:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:16.378 00:57:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:16.636 00:57:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:16.636 00:57:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:16.636 00:57:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 3392026 00:17:17.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.204 Nvme0n1 : 3.00 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:17:17.204 =================================================================================================================== 00:17:17.204 Total : 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:17:17.204 00:17:18.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.580 Nvme0n1 : 4.00 14770.00 57.70 0.00 0.00 0.00 0.00 0.00 00:17:18.580 =================================================================================================================== 00:17:18.580 Total : 14770.00 57.70 0.00 0.00 0.00 0.00 0.00 00:17:18.580 00:17:19.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.519 Nvme0n1 : 5.00 14835.60 57.95 0.00 0.00 0.00 0.00 0.00 00:17:19.520 =================================================================================================================== 00:17:19.520 Total : 14835.60 57.95 0.00 0.00 0.00 0.00 0.00 00:17:19.520 00:17:20.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.487 Nvme0n1 : 6.00 14902.67 58.21 0.00 0.00 0.00 0.00 0.00 00:17:20.487 =================================================================================================================== 00:17:20.487 Total : 14902.67 58.21 0.00 0.00 0.00 0.00 0.00 00:17:20.487 00:17:21.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.425 Nvme0n1 : 7.00 14940.57 58.36 0.00 0.00 0.00 0.00 0.00 00:17:21.425 =================================================================================================================== 00:17:21.425 Total : 14940.57 58.36 0.00 0.00 0.00 0.00 0.00 00:17:21.425 00:17:22.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.361 Nvme0n1 : 8.00 14977.00 58.50 0.00 0.00 0.00 0.00 0.00 00:17:22.361 
=================================================================================================================== 00:17:22.361 Total : 14977.00 58.50 0.00 0.00 0.00 0.00 0.00 00:17:22.361 00:17:23.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.296 Nvme0n1 : 9.00 15007.22 58.62 0.00 0.00 0.00 0.00 0.00 00:17:23.296 =================================================================================================================== 00:17:23.296 Total : 15007.22 58.62 0.00 0.00 0.00 0.00 0.00 00:17:23.296 00:17:24.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.231 Nvme0n1 : 10.00 15034.30 58.73 0.00 0.00 0.00 0.00 0.00 00:17:24.231 =================================================================================================================== 00:17:24.231 Total : 15034.30 58.73 0.00 0.00 0.00 0.00 0.00 00:17:24.231 00:17:24.231 00:17:24.231 Latency(us) 00:17:24.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.231 Nvme0n1 : 10.01 15035.26 58.73 0.00 0.00 8507.53 5048.70 18058.81 00:17:24.231 =================================================================================================================== 00:17:24.231 Total : 15035.26 58.73 0.00 0.00 8507.53 5048.70 18058.81 00:17:24.231 0 00:17:24.231 00:57:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3391884 00:17:24.231 00:57:08 -- common/autotest_common.sh@926 -- # '[' -z 3391884 ']' 00:17:24.231 00:57:08 -- common/autotest_common.sh@930 -- # kill -0 3391884 00:17:24.231 00:57:08 -- common/autotest_common.sh@931 -- # uname 00:17:24.231 00:57:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.231 00:57:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3391884 00:17:24.231 00:57:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:24.231 00:57:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:24.231 00:57:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3391884' 00:17:24.231 killing process with pid 3391884 00:17:24.231 00:57:08 -- common/autotest_common.sh@945 -- # kill 3391884 00:17:24.231 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.231 00:17:24.231 Latency(us) 00:17:24.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.231 =================================================================================================================== 00:17:24.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.231 00:57:08 -- common/autotest_common.sh@950 -- # wait 3391884 00:17:24.490 00:57:08 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:24.748 00:57:08 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:24.748 00:57:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:25.005 00:57:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:25.005 00:57:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:25.005 00:57:09 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:25.263 [2024-07-23 00:57:09.336377] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:25.263 00:57:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:25.263 00:57:09 -- common/autotest_common.sh@640 -- # local es=0 00:17:25.263 00:57:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:25.263 00:57:09 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.263 00:57:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:25.263 00:57:09 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.263 00:57:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:25.263 00:57:09 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.263 00:57:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:25.263 00:57:09 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.263 00:57:09 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:25.263 00:57:09 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:25.522 request: 00:17:25.522 { 00:17:25.522 "uuid": "9b57ece8-22b5-48d4-8e02-ec001065e37e", 00:17:25.522 "method": "bdev_lvol_get_lvstores", 00:17:25.522 "req_id": 1 00:17:25.522 } 00:17:25.522 Got JSON-RPC error response 00:17:25.523 response: 00:17:25.523 { 00:17:25.523 "code": -19, 00:17:25.523 "message": "No such device" 00:17:25.523 } 00:17:25.523 00:57:09 -- common/autotest_common.sh@643 -- # es=1 00:17:25.523 00:57:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:25.523 00:57:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:25.523 00:57:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:25.523 00:57:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:25.782 aio_bdev 00:17:25.782 00:57:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161 00:17:25.782 00:57:09 -- common/autotest_common.sh@887 -- # local bdev_name=b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161 00:17:25.782 00:57:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:25.782 00:57:09 -- common/autotest_common.sh@889 -- # local i 00:17:25.782 00:57:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:25.782 00:57:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:25.782 00:57:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:26.041 00:57:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161 -t 2000 00:17:26.300 [ 00:17:26.300 { 00:17:26.300 "name": "b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161", 00:17:26.300 "aliases": [ 00:17:26.300 "lvs/lvol" 
00:17:26.300 ], 00:17:26.300 "product_name": "Logical Volume", 00:17:26.300 "block_size": 4096, 00:17:26.300 "num_blocks": 38912, 00:17:26.300 "uuid": "b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161", 00:17:26.300 "assigned_rate_limits": { 00:17:26.300 "rw_ios_per_sec": 0, 00:17:26.300 "rw_mbytes_per_sec": 0, 00:17:26.300 "r_mbytes_per_sec": 0, 00:17:26.300 "w_mbytes_per_sec": 0 00:17:26.300 }, 00:17:26.300 "claimed": false, 00:17:26.300 "zoned": false, 00:17:26.300 "supported_io_types": { 00:17:26.300 "read": true, 00:17:26.300 "write": true, 00:17:26.300 "unmap": true, 00:17:26.300 "write_zeroes": true, 00:17:26.300 "flush": false, 00:17:26.300 "reset": true, 00:17:26.300 "compare": false, 00:17:26.300 "compare_and_write": false, 00:17:26.300 "abort": false, 00:17:26.300 "nvme_admin": false, 00:17:26.300 "nvme_io": false 00:17:26.300 }, 00:17:26.300 "driver_specific": { 00:17:26.300 "lvol": { 00:17:26.300 "lvol_store_uuid": "9b57ece8-22b5-48d4-8e02-ec001065e37e", 00:17:26.300 "base_bdev": "aio_bdev", 00:17:26.300 "thin_provision": false, 00:17:26.300 "snapshot": false, 00:17:26.300 "clone": false, 00:17:26.300 "esnap_clone": false 00:17:26.300 } 00:17:26.300 } 00:17:26.300 } 00:17:26.300 ] 00:17:26.300 00:57:10 -- common/autotest_common.sh@895 -- # return 0 00:17:26.300 00:57:10 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:26.300 00:57:10 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:26.560 00:57:10 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:26.560 00:57:10 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:26.560 00:57:10 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:26.819 00:57:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:26.819 00:57:10 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6bfb7eb-03dc-40b1-8bf3-bb74d3f66161 00:17:27.077 00:57:11 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b57ece8-22b5-48d4-8e02-ec001065e37e 00:17:27.335 00:57:11 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.596 00:17:27.596 real 0m17.492s 00:17:27.596 user 0m16.926s 00:17:27.596 sys 0m2.010s 00:17:27.596 00:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.596 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 ************************************ 00:17:27.596 END TEST lvs_grow_clean 00:17:27.596 ************************************ 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:27.596 00:57:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:27.596 00:57:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:27.596 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 ************************************ 00:17:27.596 START TEST lvs_grow_dirty 00:17:27.596 ************************************ 00:17:27.596 00:57:11 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:27.596 
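The clean variant that just finished performs its actual check as follows, condensed from the trace for reference before the dirty variant repeats it (socket path and flags as used above; $lvs is the lvstore UUID):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
           -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 2                                                 # let the randwrite run get under way
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                 # pick up the 200M -> 400M resize done earlier
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99 after the grow
    # after the 10 s run and subsystem teardown:
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 99 - 38 used by the 150 MiB lvol = 61

The lvs_grow_dirty run that starts here is the same lvs_grow function invoked with the dirty argument.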
00:57:11 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.596 00:57:11 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.856 00:57:11 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:27.856 00:57:11 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:28.116 00:57:12 -- target/nvmf_lvs_grow.sh@28 -- # lvs=08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:28.116 00:57:12 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:28.116 00:57:12 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:28.374 00:57:12 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:28.374 00:57:12 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:28.374 00:57:12 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08d6ff84-df8a-46fb-9403-de52d79b433d lvol 150 00:17:28.633 00:57:12 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:28.633 00:57:12 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.633 00:57:12 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:28.891 [2024-07-23 00:57:12.840822] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:28.891 [2024-07-23 00:57:12.840898] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:28.891 true 00:17:28.891 00:57:12 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:28.891 00:57:12 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:28.891 00:57:13 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:28.891 00:57:13 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:29.150 00:57:13 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:29.409 00:57:13 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:29.669 00:57:13 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:29.928 00:57:14 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3394609 00:17:29.928 00:57:14 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:29.928 00:57:14 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.928 00:57:14 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3394609 /var/tmp/bdevperf.sock 00:17:29.928 00:57:14 -- common/autotest_common.sh@819 -- # '[' -z 3394609 ']' 00:17:29.928 00:57:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.928 00:57:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.928 00:57:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.928 00:57:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.928 00:57:14 -- common/autotest_common.sh@10 -- # set +x 00:17:29.928 [2024-07-23 00:57:14.096053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:29.928 [2024-07-23 00:57:14.096120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394609 ] 00:17:29.928 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.187 [2024-07-23 00:57:14.156339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.187 [2024-07-23 00:57:14.246474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.125 00:57:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.125 00:57:15 -- common/autotest_common.sh@852 -- # return 0 00:17:31.125 00:57:15 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:31.383 Nvme0n1 00:17:31.383 00:57:15 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:31.642 [ 00:17:31.642 { 00:17:31.642 "name": "Nvme0n1", 00:17:31.642 "aliases": [ 00:17:31.642 "bcfd3c80-fdf9-42f2-8c68-93de30251b97" 00:17:31.642 ], 00:17:31.642 "product_name": "NVMe disk", 00:17:31.642 "block_size": 4096, 00:17:31.642 "num_blocks": 38912, 00:17:31.642 "uuid": "bcfd3c80-fdf9-42f2-8c68-93de30251b97", 00:17:31.642 "assigned_rate_limits": { 00:17:31.642 "rw_ios_per_sec": 0, 00:17:31.643 "rw_mbytes_per_sec": 0, 00:17:31.643 "r_mbytes_per_sec": 0, 00:17:31.643 "w_mbytes_per_sec": 0 00:17:31.643 }, 00:17:31.643 "claimed": false, 00:17:31.643 "zoned": false, 00:17:31.643 "supported_io_types": { 00:17:31.643 "read": true, 00:17:31.643 "write": true, 
00:17:31.643 "unmap": true, 00:17:31.643 "write_zeroes": true, 00:17:31.643 "flush": true, 00:17:31.643 "reset": true, 00:17:31.643 "compare": true, 00:17:31.643 "compare_and_write": true, 00:17:31.643 "abort": true, 00:17:31.643 "nvme_admin": true, 00:17:31.643 "nvme_io": true 00:17:31.643 }, 00:17:31.643 "driver_specific": { 00:17:31.643 "nvme": [ 00:17:31.643 { 00:17:31.643 "trid": { 00:17:31.643 "trtype": "TCP", 00:17:31.643 "adrfam": "IPv4", 00:17:31.643 "traddr": "10.0.0.2", 00:17:31.643 "trsvcid": "4420", 00:17:31.643 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:31.643 }, 00:17:31.643 "ctrlr_data": { 00:17:31.643 "cntlid": 1, 00:17:31.643 "vendor_id": "0x8086", 00:17:31.643 "model_number": "SPDK bdev Controller", 00:17:31.643 "serial_number": "SPDK0", 00:17:31.643 "firmware_revision": "24.01.1", 00:17:31.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.643 "oacs": { 00:17:31.643 "security": 0, 00:17:31.643 "format": 0, 00:17:31.643 "firmware": 0, 00:17:31.643 "ns_manage": 0 00:17:31.643 }, 00:17:31.643 "multi_ctrlr": true, 00:17:31.643 "ana_reporting": false 00:17:31.643 }, 00:17:31.643 "vs": { 00:17:31.643 "nvme_version": "1.3" 00:17:31.643 }, 00:17:31.643 "ns_data": { 00:17:31.643 "id": 1, 00:17:31.643 "can_share": true 00:17:31.643 } 00:17:31.643 } 00:17:31.643 ], 00:17:31.643 "mp_policy": "active_passive" 00:17:31.643 } 00:17:31.643 } 00:17:31.643 ] 00:17:31.643 00:57:15 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3394878 00:17:31.643 00:57:15 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:31.643 00:57:15 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.643 Running I/O for 10 seconds... 00:17:33.024 Latency(us) 00:17:33.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.024 Nvme0n1 : 1.00 14867.00 58.07 0.00 0.00 0.00 0.00 0.00 00:17:33.024 =================================================================================================================== 00:17:33.024 Total : 14867.00 58.07 0.00 0.00 0.00 0.00 0.00 00:17:33.024 00:17:33.590 00:57:17 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:33.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.849 Nvme0n1 : 2.00 14849.00 58.00 0.00 0.00 0.00 0.00 0.00 00:17:33.849 =================================================================================================================== 00:17:33.849 Total : 14849.00 58.00 0.00 0.00 0.00 0.00 0.00 00:17:33.849 00:17:33.849 true 00:17:33.849 00:57:17 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:33.849 00:57:17 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:34.109 00:57:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:34.109 00:57:18 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:34.109 00:57:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 3394878 00:17:34.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.715 Nvme0n1 : 3.00 14848.33 58.00 0.00 0.00 0.00 0.00 0.00 00:17:34.715 
=================================================================================================================== 00:17:34.715 Total : 14848.33 58.00 0.00 0.00 0.00 0.00 0.00 00:17:34.715 00:17:35.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.653 Nvme0n1 : 4.00 14976.25 58.50 0.00 0.00 0.00 0.00 0.00 00:17:35.653 =================================================================================================================== 00:17:35.653 Total : 14976.25 58.50 0.00 0.00 0.00 0.00 0.00 00:17:35.653 00:17:37.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.027 Nvme0n1 : 5.00 14967.00 58.46 0.00 0.00 0.00 0.00 0.00 00:17:37.027 =================================================================================================================== 00:17:37.027 Total : 14967.00 58.46 0.00 0.00 0.00 0.00 0.00 00:17:37.027 00:17:37.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.963 Nvme0n1 : 6.00 15008.33 58.63 0.00 0.00 0.00 0.00 0.00 00:17:37.963 =================================================================================================================== 00:17:37.963 Total : 15008.33 58.63 0.00 0.00 0.00 0.00 0.00 00:17:37.963 00:17:38.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.898 Nvme0n1 : 7.00 15040.29 58.75 0.00 0.00 0.00 0.00 0.00 00:17:38.898 =================================================================================================================== 00:17:38.898 Total : 15040.29 58.75 0.00 0.00 0.00 0.00 0.00 00:17:38.898 00:17:39.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.834 Nvme0n1 : 8.00 15072.12 58.88 0.00 0.00 0.00 0.00 0.00 00:17:39.834 =================================================================================================================== 00:17:39.834 Total : 15072.12 58.88 0.00 0.00 0.00 0.00 0.00 00:17:39.834 00:17:40.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.772 Nvme0n1 : 9.00 15097.11 58.97 0.00 0.00 0.00 0.00 0.00 00:17:40.772 =================================================================================================================== 00:17:40.772 Total : 15097.11 58.97 0.00 0.00 0.00 0.00 0.00 00:17:40.772 00:17:41.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.709 Nvme0n1 : 10.00 15116.90 59.05 0.00 0.00 0.00 0.00 0.00 00:17:41.709 =================================================================================================================== 00:17:41.709 Total : 15116.90 59.05 0.00 0.00 0.00 0.00 0.00 00:17:41.709 00:17:41.709 00:17:41.709 Latency(us) 00:17:41.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.709 Nvme0n1 : 10.01 15115.69 59.05 0.00 0.00 8462.23 2912.71 12718.84 00:17:41.709 =================================================================================================================== 00:17:41.709 Total : 15115.69 59.05 0.00 0.00 8462.23 2912.71 12718.84 00:17:41.709 0 00:17:41.709 00:57:25 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3394609 00:17:41.709 00:57:25 -- common/autotest_common.sh@926 -- # '[' -z 3394609 ']' 00:17:41.709 00:57:25 -- common/autotest_common.sh@930 -- # kill -0 3394609 00:17:41.709 00:57:25 -- common/autotest_common.sh@931 -- # uname 00:17:41.709 00:57:25 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.709 00:57:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3394609 00:17:41.709 00:57:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:41.709 00:57:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:41.709 00:57:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3394609' 00:17:41.709 killing process with pid 3394609 00:17:41.709 00:57:25 -- common/autotest_common.sh@945 -- # kill 3394609 00:17:41.709 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.709 00:17:41.709 Latency(us) 00:17:41.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.709 =================================================================================================================== 00:17:41.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.709 00:57:25 -- common/autotest_common.sh@950 -- # wait 3394609 00:17:41.967 00:57:26 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:42.225 00:57:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:42.225 00:57:26 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:42.483 00:57:26 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:42.483 00:57:26 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:42.483 00:57:26 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3391305 00:17:42.483 00:57:26 -- target/nvmf_lvs_grow.sh@74 -- # wait 3391305 00:17:42.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3391305 Killed "${NVMF_APP[@]}" "$@" 00:17:42.483 00:57:26 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:42.483 00:57:26 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:42.483 00:57:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:42.483 00:57:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:42.483 00:57:26 -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 00:57:26 -- nvmf/common.sh@469 -- # nvmfpid=3396130 00:17:42.483 00:57:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.483 00:57:26 -- nvmf/common.sh@470 -- # waitforlisten 3396130 00:17:42.483 00:57:26 -- common/autotest_common.sh@819 -- # '[' -z 3396130 ']' 00:17:42.483 00:57:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.483 00:57:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.483 00:57:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.483 00:57:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.483 00:57:26 -- common/autotest_common.sh@10 -- # set +x 00:17:42.742 [2024-07-23 00:57:26.706078] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:42.742 [2024-07-23 00:57:26.706169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.742 [2024-07-23 00:57:26.769422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.742 [2024-07-23 00:57:26.851450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.742 [2024-07-23 00:57:26.851620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.742 [2024-07-23 00:57:26.851639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.742 [2024-07-23 00:57:26.851651] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.742 [2024-07-23 00:57:26.851677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.679 00:57:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.679 00:57:27 -- common/autotest_common.sh@852 -- # return 0 00:17:43.679 00:57:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.679 00:57:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:43.679 00:57:27 -- common/autotest_common.sh@10 -- # set +x 00:17:43.679 00:57:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.679 00:57:27 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.937 [2024-07-23 00:57:27.935706] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:43.937 [2024-07-23 00:57:27.935847] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:43.937 [2024-07-23 00:57:27.935903] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:43.937 00:57:27 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:43.937 00:57:27 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:43.937 00:57:27 -- common/autotest_common.sh@887 -- # local bdev_name=bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:43.937 00:57:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.937 00:57:27 -- common/autotest_common.sh@889 -- # local i 00:17:43.937 00:57:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.937 00:57:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.937 00:57:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.194 00:57:28 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bcfd3c80-fdf9-42f2-8c68-93de30251b97 -t 2000 00:17:44.453 [ 00:17:44.453 { 00:17:44.453 "name": "bcfd3c80-fdf9-42f2-8c68-93de30251b97", 00:17:44.453 "aliases": [ 00:17:44.453 "lvs/lvol" 00:17:44.453 ], 00:17:44.453 "product_name": "Logical Volume", 00:17:44.453 "block_size": 4096, 00:17:44.453 "num_blocks": 38912, 00:17:44.453 "uuid": "bcfd3c80-fdf9-42f2-8c68-93de30251b97", 00:17:44.453 "assigned_rate_limits": { 00:17:44.453 "rw_ios_per_sec": 0, 00:17:44.453 "rw_mbytes_per_sec": 0, 00:17:44.453 "r_mbytes_per_sec": 0, 00:17:44.453 
"w_mbytes_per_sec": 0 00:17:44.453 }, 00:17:44.453 "claimed": false, 00:17:44.453 "zoned": false, 00:17:44.453 "supported_io_types": { 00:17:44.453 "read": true, 00:17:44.453 "write": true, 00:17:44.453 "unmap": true, 00:17:44.453 "write_zeroes": true, 00:17:44.453 "flush": false, 00:17:44.453 "reset": true, 00:17:44.453 "compare": false, 00:17:44.453 "compare_and_write": false, 00:17:44.453 "abort": false, 00:17:44.453 "nvme_admin": false, 00:17:44.453 "nvme_io": false 00:17:44.453 }, 00:17:44.453 "driver_specific": { 00:17:44.453 "lvol": { 00:17:44.453 "lvol_store_uuid": "08d6ff84-df8a-46fb-9403-de52d79b433d", 00:17:44.453 "base_bdev": "aio_bdev", 00:17:44.453 "thin_provision": false, 00:17:44.453 "snapshot": false, 00:17:44.453 "clone": false, 00:17:44.453 "esnap_clone": false 00:17:44.453 } 00:17:44.453 } 00:17:44.453 } 00:17:44.453 ] 00:17:44.453 00:57:28 -- common/autotest_common.sh@895 -- # return 0 00:17:44.453 00:57:28 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:44.453 00:57:28 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:44.713 00:57:28 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:44.713 00:57:28 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:44.713 00:57:28 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:44.971 00:57:28 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:44.971 00:57:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:44.971 [2024-07-23 00:57:29.168621] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:45.231 00:57:29 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:45.231 00:57:29 -- common/autotest_common.sh@640 -- # local es=0 00:17:45.231 00:57:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:45.231 00:57:29 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.231 00:57:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.231 00:57:29 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.231 00:57:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.231 00:57:29 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.231 00:57:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.231 00:57:29 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.231 00:57:29 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:45.231 00:57:29 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:45.489 request: 00:17:45.489 { 00:17:45.489 
"uuid": "08d6ff84-df8a-46fb-9403-de52d79b433d", 00:17:45.489 "method": "bdev_lvol_get_lvstores", 00:17:45.489 "req_id": 1 00:17:45.489 } 00:17:45.489 Got JSON-RPC error response 00:17:45.489 response: 00:17:45.489 { 00:17:45.489 "code": -19, 00:17:45.489 "message": "No such device" 00:17:45.489 } 00:17:45.489 00:57:29 -- common/autotest_common.sh@643 -- # es=1 00:17:45.489 00:57:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:45.489 00:57:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:45.489 00:57:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:45.489 00:57:29 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.746 aio_bdev 00:17:45.746 00:57:29 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:45.746 00:57:29 -- common/autotest_common.sh@887 -- # local bdev_name=bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:45.746 00:57:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.746 00:57:29 -- common/autotest_common.sh@889 -- # local i 00:17:45.746 00:57:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.746 00:57:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.746 00:57:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.003 00:57:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bcfd3c80-fdf9-42f2-8c68-93de30251b97 -t 2000 00:17:46.003 [ 00:17:46.003 { 00:17:46.003 "name": "bcfd3c80-fdf9-42f2-8c68-93de30251b97", 00:17:46.003 "aliases": [ 00:17:46.003 "lvs/lvol" 00:17:46.003 ], 00:17:46.003 "product_name": "Logical Volume", 00:17:46.003 "block_size": 4096, 00:17:46.003 "num_blocks": 38912, 00:17:46.003 "uuid": "bcfd3c80-fdf9-42f2-8c68-93de30251b97", 00:17:46.003 "assigned_rate_limits": { 00:17:46.003 "rw_ios_per_sec": 0, 00:17:46.003 "rw_mbytes_per_sec": 0, 00:17:46.003 "r_mbytes_per_sec": 0, 00:17:46.003 "w_mbytes_per_sec": 0 00:17:46.003 }, 00:17:46.003 "claimed": false, 00:17:46.003 "zoned": false, 00:17:46.003 "supported_io_types": { 00:17:46.003 "read": true, 00:17:46.003 "write": true, 00:17:46.003 "unmap": true, 00:17:46.003 "write_zeroes": true, 00:17:46.003 "flush": false, 00:17:46.003 "reset": true, 00:17:46.003 "compare": false, 00:17:46.003 "compare_and_write": false, 00:17:46.003 "abort": false, 00:17:46.003 "nvme_admin": false, 00:17:46.003 "nvme_io": false 00:17:46.003 }, 00:17:46.003 "driver_specific": { 00:17:46.003 "lvol": { 00:17:46.003 "lvol_store_uuid": "08d6ff84-df8a-46fb-9403-de52d79b433d", 00:17:46.003 "base_bdev": "aio_bdev", 00:17:46.003 "thin_provision": false, 00:17:46.003 "snapshot": false, 00:17:46.003 "clone": false, 00:17:46.003 "esnap_clone": false 00:17:46.003 } 00:17:46.003 } 00:17:46.003 } 00:17:46.003 ] 00:17:46.263 00:57:30 -- common/autotest_common.sh@895 -- # return 0 00:17:46.263 00:57:30 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:46.263 00:57:30 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:46.263 00:57:30 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:46.263 00:57:30 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:46.263 00:57:30 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:46.522 00:57:30 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:46.522 00:57:30 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcfd3c80-fdf9-42f2-8c68-93de30251b97 00:17:46.782 00:57:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08d6ff84-df8a-46fb-9403-de52d79b433d 00:17:47.041 00:57:31 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.612 00:57:31 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.612 00:17:47.612 real 0m19.930s 00:17:47.612 user 0m49.553s 00:17:47.612 sys 0m4.787s 00:17:47.612 00:57:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.612 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 ************************************ 00:17:47.612 END TEST lvs_grow_dirty 00:17:47.612 ************************************ 00:17:47.612 00:57:31 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:47.612 00:57:31 -- common/autotest_common.sh@796 -- # type=--id 00:17:47.612 00:57:31 -- common/autotest_common.sh@797 -- # id=0 00:17:47.612 00:57:31 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:47.612 00:57:31 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.612 00:57:31 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:47.612 00:57:31 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:47.612 00:57:31 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:47.612 00:57:31 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.612 nvmf_trace.0 00:17:47.612 00:57:31 -- common/autotest_common.sh@811 -- # return 0 00:17:47.612 00:57:31 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:47.612 00:57:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:47.612 00:57:31 -- nvmf/common.sh@116 -- # sync 00:17:47.612 00:57:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:47.612 00:57:31 -- nvmf/common.sh@119 -- # set +e 00:17:47.612 00:57:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:47.612 00:57:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:47.612 rmmod nvme_tcp 00:17:47.612 rmmod nvme_fabrics 00:17:47.612 rmmod nvme_keyring 00:17:47.612 00:57:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:47.612 00:57:31 -- nvmf/common.sh@123 -- # set -e 00:17:47.612 00:57:31 -- nvmf/common.sh@124 -- # return 0 00:17:47.612 00:57:31 -- nvmf/common.sh@477 -- # '[' -n 3396130 ']' 00:17:47.612 00:57:31 -- nvmf/common.sh@478 -- # killprocess 3396130 00:17:47.612 00:57:31 -- common/autotest_common.sh@926 -- # '[' -z 3396130 ']' 00:17:47.612 00:57:31 -- common/autotest_common.sh@930 -- # kill -0 3396130 00:17:47.612 00:57:31 -- common/autotest_common.sh@931 -- # uname 00:17:47.612 00:57:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.612 00:57:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3396130 00:17:47.612 00:57:31 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:17:47.612 00:57:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.612 00:57:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3396130' 00:17:47.612 killing process with pid 3396130 00:17:47.612 00:57:31 -- common/autotest_common.sh@945 -- # kill 3396130 00:17:47.612 00:57:31 -- common/autotest_common.sh@950 -- # wait 3396130 00:17:47.873 00:57:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:47.873 00:57:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:47.873 00:57:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:47.873 00:57:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.873 00:57:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:47.873 00:57:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.873 00:57:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.873 00:57:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.783 00:57:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:49.783 00:17:49.783 real 0m43.218s 00:17:49.783 user 1m12.909s 00:17:49.783 sys 0m8.586s 00:17:49.783 00:57:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.783 00:57:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.783 ************************************ 00:17:49.783 END TEST nvmf_lvs_grow 00:17:49.783 ************************************ 00:17:49.783 00:57:33 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.783 00:57:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:49.783 00:57:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.783 00:57:33 -- common/autotest_common.sh@10 -- # set +x 00:17:50.043 ************************************ 00:17:50.043 START TEST nvmf_bdev_io_wait 00:17:50.043 ************************************ 00:17:50.043 00:57:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:50.043 * Looking for test storage... 
00:17:50.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.043 00:57:34 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.043 00:57:34 -- nvmf/common.sh@7 -- # uname -s 00:17:50.043 00:57:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.043 00:57:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.043 00:57:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.043 00:57:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.043 00:57:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.043 00:57:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.043 00:57:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.043 00:57:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.043 00:57:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.043 00:57:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.043 00:57:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.043 00:57:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.043 00:57:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.043 00:57:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.043 00:57:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.043 00:57:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.043 00:57:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.043 00:57:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.043 00:57:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.043 00:57:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.043 00:57:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.043 00:57:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.043 00:57:34 -- paths/export.sh@5 -- # export PATH 00:17:50.043 00:57:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.043 00:57:34 -- nvmf/common.sh@46 -- # : 0 00:17:50.043 00:57:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.043 00:57:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.043 00:57:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.043 00:57:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.043 00:57:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.043 00:57:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.043 00:57:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.043 00:57:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.043 00:57:34 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.043 00:57:34 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.043 00:57:34 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:50.043 00:57:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.043 00:57:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.043 00:57:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.043 00:57:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.043 00:57:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.043 00:57:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.043 00:57:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.043 00:57:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.043 00:57:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:50.043 00:57:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:50.043 00:57:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:50.043 00:57:34 -- common/autotest_common.sh@10 -- # set +x 00:17:51.975 00:57:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.975 00:57:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:51.975 00:57:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:51.975 00:57:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:51.975 00:57:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:51.975 00:57:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:51.975 00:57:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:51.975 00:57:35 -- nvmf/common.sh@294 -- # net_devs=() 00:17:51.975 00:57:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:51.975 00:57:35 -- 
nvmf/common.sh@295 -- # e810=() 00:17:51.975 00:57:35 -- nvmf/common.sh@295 -- # local -ga e810 00:17:51.975 00:57:35 -- nvmf/common.sh@296 -- # x722=() 00:17:51.975 00:57:35 -- nvmf/common.sh@296 -- # local -ga x722 00:17:51.975 00:57:35 -- nvmf/common.sh@297 -- # mlx=() 00:17:51.975 00:57:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:51.975 00:57:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.975 00:57:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:51.975 00:57:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:51.975 00:57:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:51.975 00:57:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:51.975 00:57:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:51.975 00:57:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:51.975 00:57:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.975 00:57:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:51.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.976 00:57:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.976 00:57:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.976 00:57:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:51.976 00:57:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.976 00:57:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.976 00:57:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.976 00:57:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.976 00:57:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:17:51.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.976 00:57:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.976 00:57:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.976 00:57:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.976 00:57:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.976 00:57:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.976 00:57:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.976 00:57:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.976 00:57:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:51.976 00:57:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:51.976 00:57:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:51.976 00:57:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:51.976 00:57:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.976 00:57:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.976 00:57:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.976 00:57:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:51.976 00:57:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.976 00:57:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.976 00:57:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:51.976 00:57:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.976 00:57:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.976 00:57:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:51.976 00:57:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:51.976 00:57:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.976 00:57:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.976 00:57:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.976 00:57:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.976 00:57:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:51.976 00:57:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.976 00:57:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.976 00:57:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.976 00:57:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:51.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:17:51.976 00:17:51.976 --- 10.0.0.2 ping statistics --- 00:17:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.976 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:17:51.976 00:57:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:17:51.976 00:17:51.976 --- 10.0.0.1 ping statistics --- 00:17:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.976 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:17:51.976 00:57:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.976 00:57:36 -- nvmf/common.sh@410 -- # return 0 00:17:51.976 00:57:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.976 00:57:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.976 00:57:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.976 00:57:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.976 00:57:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.976 00:57:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.976 00:57:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.976 00:57:36 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:51.976 00:57:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.976 00:57:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:51.976 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:51.976 00:57:36 -- nvmf/common.sh@469 -- # nvmfpid=3398800 00:17:51.976 00:57:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:51.976 00:57:36 -- nvmf/common.sh@470 -- # waitforlisten 3398800 00:17:51.976 00:57:36 -- common/autotest_common.sh@819 -- # '[' -z 3398800 ']' 00:17:51.976 00:57:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.976 00:57:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.976 00:57:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.976 00:57:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.976 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:51.976 [2024-07-23 00:57:36.163865] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:51.976 [2024-07-23 00:57:36.163947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.236 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.237 [2024-07-23 00:57:36.231846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.237 [2024-07-23 00:57:36.322050] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.237 [2024-07-23 00:57:36.322210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.237 [2024-07-23 00:57:36.322229] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.237 [2024-07-23 00:57:36.322243] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.237 [2024-07-23 00:57:36.322443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.237 [2024-07-23 00:57:36.322497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.237 [2024-07-23 00:57:36.322610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.237 [2024-07-23 00:57:36.322617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.237 00:57:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.237 00:57:36 -- common/autotest_common.sh@852 -- # return 0 00:17:52.237 00:57:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.237 00:57:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:52.237 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.237 00:57:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.237 00:57:36 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:52.237 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.237 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.237 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.237 00:57:36 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:52.237 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.237 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.496 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.496 00:57:36 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.496 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.496 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.496 [2024-07-23 00:57:36.466757] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.496 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.496 00:57:36 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:52.496 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.496 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.496 Malloc0 00:17:52.496 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.496 00:57:36 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:52.496 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.496 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.496 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.496 00:57:36 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.496 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.496 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.496 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.496 00:57:36 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.496 00:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.497 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:52.497 [2024-07-23 00:57:36.536194] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.497 00:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3398823 00:17:52.497 
00:57:36 -- target/bdev_io_wait.sh@30 -- # READ_PID=3398824 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # config=() 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3398827 00:17:52.497 00:57:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.497 { 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme$subsystem", 00:17:52.497 "trtype": "$TEST_TRANSPORT", 00:17:52.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "$NVMF_PORT", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.497 "hdgst": ${hdgst:-false}, 00:17:52.497 "ddgst": ${ddgst:-false} 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 } 00:17:52.497 EOF 00:17:52.497 )") 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # config=() 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.497 00:57:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3398829 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.497 { 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme$subsystem", 00:17:52.497 "trtype": "$TEST_TRANSPORT", 00:17:52.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "$NVMF_PORT", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.497 "hdgst": ${hdgst:-false}, 00:17:52.497 "ddgst": ${ddgst:-false} 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 } 00:17:52.497 EOF 00:17:52.497 )") 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@35 -- # sync 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # config=() 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # cat 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:52.497 00:57:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.497 { 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme$subsystem", 00:17:52.497 "trtype": "$TEST_TRANSPORT", 00:17:52.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.497 
"adrfam": "ipv4", 00:17:52.497 "trsvcid": "$NVMF_PORT", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.497 "hdgst": ${hdgst:-false}, 00:17:52.497 "ddgst": ${ddgst:-false} 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 } 00:17:52.497 EOF 00:17:52.497 )") 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # config=() 00:17:52.497 00:57:36 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.497 00:57:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # cat 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.497 { 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme$subsystem", 00:17:52.497 "trtype": "$TEST_TRANSPORT", 00:17:52.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "$NVMF_PORT", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.497 "hdgst": ${hdgst:-false}, 00:17:52.497 "ddgst": ${ddgst:-false} 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 } 00:17:52.497 EOF 00:17:52.497 )") 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # cat 00:17:52.497 00:57:36 -- target/bdev_io_wait.sh@37 -- # wait 3398823 00:17:52.497 00:57:36 -- nvmf/common.sh@542 -- # cat 00:17:52.497 00:57:36 -- nvmf/common.sh@544 -- # jq . 00:17:52.497 00:57:36 -- nvmf/common.sh@544 -- # jq . 00:17:52.497 00:57:36 -- nvmf/common.sh@544 -- # jq . 00:17:52.497 00:57:36 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.497 00:57:36 -- nvmf/common.sh@544 -- # jq . 00:17:52.497 00:57:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme1", 00:17:52.497 "trtype": "tcp", 00:17:52.497 "traddr": "10.0.0.2", 00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "4420", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.497 "hdgst": false, 00:17:52.497 "ddgst": false 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 }' 00:17:52.497 00:57:36 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.497 00:57:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme1", 00:17:52.497 "trtype": "tcp", 00:17:52.497 "traddr": "10.0.0.2", 00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "4420", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.497 "hdgst": false, 00:17:52.497 "ddgst": false 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 }' 00:17:52.497 00:57:36 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.497 00:57:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme1", 00:17:52.497 "trtype": "tcp", 00:17:52.497 "traddr": "10.0.0.2", 00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "4420", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.497 "hdgst": false, 00:17:52.497 "ddgst": false 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 }' 00:17:52.497 00:57:36 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.497 00:57:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.497 "params": { 00:17:52.497 "name": "Nvme1", 00:17:52.497 "trtype": "tcp", 00:17:52.497 "traddr": "10.0.0.2", 
00:17:52.497 "adrfam": "ipv4", 00:17:52.497 "trsvcid": "4420", 00:17:52.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.497 "hdgst": false, 00:17:52.497 "ddgst": false 00:17:52.497 }, 00:17:52.497 "method": "bdev_nvme_attach_controller" 00:17:52.497 }' 00:17:52.497 [2024-07-23 00:57:36.582364] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:52.498 [2024-07-23 00:57:36.582435] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:52.498 [2024-07-23 00:57:36.582869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:52.498 [2024-07-23 00:57:36.582867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:52.498 [2024-07-23 00:57:36.582867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:52.498 [2024-07-23 00:57:36.582952] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-23 00:57:36.582953] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-23 00:57:36.582953] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:52.498 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:52.498 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:52.498 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.756 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.756 [2024-07-23 00:57:36.756366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.756 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.757 [2024-07-23 00:57:36.830659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:52.757 [2024-07-23 00:57:36.857765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.757 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.757 [2024-07-23 00:57:36.931962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:53.015 [2024-07-23 00:57:36.961834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.015 [2024-07-23 00:57:37.034316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.015 [2024-07-23 00:57:37.038786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:53.015 [2024-07-23 00:57:37.102434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:53.273 Running I/O for 1 seconds... 00:17:53.273 Running I/O for 1 seconds... 00:17:53.273 Running I/O for 1 seconds... 00:17:53.273 Running I/O for 1 seconds... 
00:17:54.213
00:17:54.213 Latency(us)
00:17:54.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.213 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:54.213 Nvme1n1 : 1.01 11222.91 43.84 0.00 0.00 11367.64 5631.24 16699.54
00:17:54.213 ===================================================================================================================
00:17:54.213 Total : 11222.91 43.84 0.00 0.00 11367.64 5631.24 16699.54
00:17:54.213
00:17:54.213 Latency(us)
00:17:54.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.213 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:54.213 Nvme1n1 : 1.00 200952.30 784.97 0.00 0.00 634.72 248.79 843.47
00:17:54.213 ===================================================================================================================
00:17:54.213 Total : 200952.30 784.97 0.00 0.00 634.72 248.79 843.47
00:17:54.213
00:17:54.213 Latency(us)
00:17:54.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.213 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:54.213 Nvme1n1 : 1.01 9296.74 36.32 0.00 0.00 13704.29 8641.04 24078.41
00:17:54.213 ===================================================================================================================
00:17:54.213 Total : 9296.74 36.32 0.00 0.00 13704.29 8641.04 24078.41
00:17:54.213
00:17:54.213 Latency(us)
00:17:54.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.213 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:54.213 Nvme1n1 : 1.01 9258.03 36.16 0.00 0.00 13763.88 5752.60 23495.87
00:17:54.213 ===================================================================================================================
00:17:54.213 Total : 9258.03 36.16 0.00 0.00 13763.88 5752.60 23495.87
00:17:54.781 00:57:38 -- target/bdev_io_wait.sh@38 -- # wait 3398824 00:17:54.781 00:57:38 -- target/bdev_io_wait.sh@39 -- # wait 3398827 00:17:54.781 00:57:38 -- target/bdev_io_wait.sh@40 -- # wait 3398829 00:17:54.781 00:57:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.781 00:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.781 00:57:38 -- common/autotest_common.sh@10 -- # set +x 00:17:54.781 00:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.781 00:57:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:54.781 00:57:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:54.781 00:57:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:54.781 00:57:38 -- nvmf/common.sh@116 -- # sync 00:17:54.781 00:57:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:54.781 00:57:38 -- nvmf/common.sh@119 -- # set +e 00:17:54.781 00:57:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:54.781 00:57:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:54.781 rmmod nvme_tcp 00:17:54.781 rmmod nvme_fabrics 00:17:54.781 rmmod nvme_keyring 00:17:54.781 00:57:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:54.781 00:57:38 -- nvmf/common.sh@123 -- # set -e 00:17:54.781 00:57:38 -- nvmf/common.sh@124 -- # return 0 00:17:54.781 00:57:38 -- nvmf/common.sh@477 -- # '[' -n 3398800 ']' 00:17:54.781 00:57:38 -- nvmf/common.sh@478 -- # killprocess 3398800 00:17:54.781 00:57:38 -- common/autotest_common.sh@926 -- # '[' -z 3398800 ']' 00:17:54.781 00:57:38 --
common/autotest_common.sh@930 -- # kill -0 3398800 00:17:54.781 00:57:38 -- common/autotest_common.sh@931 -- # uname 00:17:54.781 00:57:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.781 00:57:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3398800 00:17:54.781 00:57:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:54.781 00:57:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:54.781 00:57:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3398800' 00:17:54.781 killing process with pid 3398800 00:17:54.781 00:57:38 -- common/autotest_common.sh@945 -- # kill 3398800 00:17:54.781 00:57:38 -- common/autotest_common.sh@950 -- # wait 3398800 00:17:55.042 00:57:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:55.042 00:57:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:55.042 00:57:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:55.042 00:57:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.042 00:57:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:55.042 00:57:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.042 00:57:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.042 00:57:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.949 00:57:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:56.949 00:17:56.949 real 0m7.071s 00:17:56.949 user 0m14.979s 00:17:56.949 sys 0m3.772s 00:17:56.949 00:57:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.949 00:57:41 -- common/autotest_common.sh@10 -- # set +x 00:17:56.949 ************************************ 00:17:56.949 END TEST nvmf_bdev_io_wait 00:17:56.949 ************************************ 00:17:56.949 00:57:41 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:56.949 00:57:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:56.949 00:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:56.949 00:57:41 -- common/autotest_common.sh@10 -- # set +x 00:17:56.949 ************************************ 00:17:56.949 START TEST nvmf_queue_depth 00:17:56.949 ************************************ 00:17:56.949 00:57:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:56.949 * Looking for test storage... 
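The nvmf_queue_depth suite starting here follows the same outer skeleton as nvmf_bdev_io_wait above, and the later suites in this log repeat it: source the shared nvmf helpers, bring the network up, start the target, run the suite body, tear everything down. A condensed sketch built from the helper names traced in this log (nvmftestinit, nvmfappstart, nvmftestfini); the suite body is what differs each time:

source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
nvmftestinit                 # pick the E810 ports, build the cvl_0_0_ns_spdk netns, assign 10.0.0.1/10.0.0.2, modprobe nvme-tcp
nvmfappstart -m 0x2          # start nvmf_tgt inside the namespace and wait for its RPC socket
# ... suite-specific rpc_cmd setup and bdevperf runs ...
trap - SIGINT SIGTERM EXIT
nvmftestfini                 # rmmod nvme_tcp/nvme_fabrics/nvme_keyring, kill nvmf_tgt, remove the netns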
00:17:56.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.949 00:57:41 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.949 00:57:41 -- nvmf/common.sh@7 -- # uname -s 00:17:56.949 00:57:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.949 00:57:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.949 00:57:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.950 00:57:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.950 00:57:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.950 00:57:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.950 00:57:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.950 00:57:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.950 00:57:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.950 00:57:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.950 00:57:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.950 00:57:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.950 00:57:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.950 00:57:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.207 00:57:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.207 00:57:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.207 00:57:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.207 00:57:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.207 00:57:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.207 00:57:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.207 00:57:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.207 00:57:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.207 00:57:41 -- paths/export.sh@5 -- # export PATH 00:17:57.207 00:57:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.207 00:57:41 -- nvmf/common.sh@46 -- # : 0 00:17:57.207 00:57:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:57.207 00:57:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:57.207 00:57:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:57.207 00:57:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.207 00:57:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.207 00:57:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:57.207 00:57:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:57.207 00:57:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:57.207 00:57:41 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:57.207 00:57:41 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:57.207 00:57:41 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.207 00:57:41 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:57.207 00:57:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:57.207 00:57:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.207 00:57:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:57.207 00:57:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:57.207 00:57:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:57.207 00:57:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.207 00:57:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.207 00:57:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.207 00:57:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:57.207 00:57:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:57.207 00:57:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:57.207 00:57:41 -- common/autotest_common.sh@10 -- # set +x 00:17:59.112 00:57:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:59.112 00:57:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:59.112 00:57:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:59.112 00:57:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:59.112 00:57:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:59.112 00:57:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:59.112 00:57:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:59.112 00:57:43 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:59.112 00:57:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:59.112 00:57:43 -- nvmf/common.sh@295 -- # e810=() 00:17:59.112 00:57:43 -- nvmf/common.sh@295 -- # local -ga e810 00:17:59.112 00:57:43 -- nvmf/common.sh@296 -- # x722=() 00:17:59.112 00:57:43 -- nvmf/common.sh@296 -- # local -ga x722 00:17:59.112 00:57:43 -- nvmf/common.sh@297 -- # mlx=() 00:17:59.112 00:57:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:59.112 00:57:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.112 00:57:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:59.112 00:57:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:59.112 00:57:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:59.112 00:57:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.112 00:57:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:59.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:59.112 00:57:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.112 00:57:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:59.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:59.112 00:57:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:59.112 00:57:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.112 00:57:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.112 00:57:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.112 00:57:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:59.112 00:57:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:59.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:59.112 00:57:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.112 00:57:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.112 00:57:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.112 00:57:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.112 00:57:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.112 00:57:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:59.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:59.112 00:57:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.112 00:57:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:59.112 00:57:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:59.112 00:57:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:59.112 00:57:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.112 00:57:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.112 00:57:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.112 00:57:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:59.112 00:57:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.112 00:57:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.112 00:57:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:59.112 00:57:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.112 00:57:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.112 00:57:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:59.112 00:57:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:59.112 00:57:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.112 00:57:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.112 00:57:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.112 00:57:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.112 00:57:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:59.112 00:57:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.112 00:57:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.112 00:57:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.112 00:57:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:59.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:17:59.112 00:17:59.112 --- 10.0.0.2 ping statistics --- 00:17:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.112 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:59.112 00:57:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:17:59.112 00:17:59.112 --- 10.0.0.1 ping statistics --- 00:17:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.112 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:59.112 00:57:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.112 00:57:43 -- nvmf/common.sh@410 -- # return 0 00:17:59.112 00:57:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.112 00:57:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.112 00:57:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:59.112 00:57:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.112 00:57:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:59.112 00:57:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:59.112 00:57:43 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:59.112 00:57:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.112 00:57:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:59.112 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:59.112 00:57:43 -- nvmf/common.sh@469 -- # nvmfpid=3401070 00:17:59.112 00:57:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.112 00:57:43 -- nvmf/common.sh@470 -- # waitforlisten 3401070 00:17:59.112 00:57:43 -- common/autotest_common.sh@819 -- # '[' -z 3401070 ']' 00:17:59.112 00:57:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.112 00:57:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.112 00:57:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.112 00:57:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.112 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:59.370 [2024-07-23 00:57:43.317873] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:59.370 [2024-07-23 00:57:43.317983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.370 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.370 [2024-07-23 00:57:43.379879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.370 [2024-07-23 00:57:43.462032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:59.370 [2024-07-23 00:57:43.462189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.370 [2024-07-23 00:57:43.462207] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.370 [2024-07-23 00:57:43.462220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
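To summarise the nvmf_tcp_init trace above: the two E810 ports are split across network namespaces, cvl_0_0 is moved into cvl_0_0_ns_spdk as the target-side interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and connectivity is ping-checked in both directions before nvmf_tgt is started inside the namespace. Condensed from the commands traced above (the cvl_* interface names are specific to this machine):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator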
00:17:59.370 [2024-07-23 00:57:43.462246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.310 00:57:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:00.310 00:57:44 -- common/autotest_common.sh@852 -- # return 0 00:18:00.310 00:57:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:00.310 00:57:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:00.310 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 00:57:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.310 00:57:44 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.310 00:57:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.310 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 [2024-07-23 00:57:44.278294] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.310 00:57:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.310 00:57:44 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.310 00:57:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.310 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 Malloc0 00:18:00.310 00:57:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.310 00:57:44 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.310 00:57:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.310 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 00:57:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.310 00:57:44 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.310 00:57:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.310 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 00:57:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.310 00:57:44 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.310 00:57:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.310 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 [2024-07-23 00:57:44.340218] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.310 00:57:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.310 00:57:44 -- target/queue_depth.sh@30 -- # bdevperf_pid=3401223 00:18:00.310 00:57:44 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:00.310 00:57:44 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.310 00:57:44 -- target/queue_depth.sh@33 -- # waitforlisten 3401223 /var/tmp/bdevperf.sock 00:18:00.310 00:57:44 -- common/autotest_common.sh@819 -- # '[' -z 3401223 ']' 00:18:00.310 00:57:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.310 00:57:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.310 00:57:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
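The rpc_cmd calls traced above are the target-side bring-up for this suite: create the TCP transport, back the subsystem with a 64 MB / 512-byte-block malloc bdev, and expose it on 10.0.0.2:4420. Spelled out against scripts/rpc.py (rpc_cmd is the harness wrapper that points rpc.py at the nvmf_tgt RPC socket; a sketch of the same sequence, assuming the default /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                         # flags exactly as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0                            # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from queue_depth.sh
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side that follows starts a separate bdevperf app with its own RPC socket (/var/tmp/bdevperf.sock), attaches to the subsystem with bdev_nvme_attach_controller, and drives the resulting NVMe0n1 bdev with a verify workload at queue depth 1024 for 10 seconds.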
00:18:00.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:57:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:00.310 [2024-07-23 00:57:44.387567] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:00.310 [2024-07-23 00:57:44.387668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401223 ] 00:18:00.310 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.310 [2024-07-23 00:57:44.452439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.570 [2024-07-23 00:57:44.540527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.138 00:57:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:57:45 -- common/autotest_common.sh@852 -- # return 0 00:57:45 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:18:01.398 NVMe0n1 00:18:01.398 00:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.398 00:57:45 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.398 Running I/O for 10 seconds... 00:18:13.612
00:18:13.612 Latency(us)
00:18:13.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:13.612 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:18:13.612 Verification LBA range: start 0x0 length 0x4000
00:18:13.612 NVMe0n1 : 10.07 12238.80 47.81 0.00 0.00 83340.81 14951.92 64468.01
00:18:13.612 ===================================================================================================================
00:18:13.612 Total : 12238.80 47.81 0.00 0.00 83340.81 14951.92 64468.01
00:18:13.612 0 00:18:13.612 00:57:55 -- target/queue_depth.sh@39 -- # killprocess 3401223 00:18:13.612 00:57:55 -- common/autotest_common.sh@926 -- # '[' -z 3401223 ']' 00:18:13.612 00:57:55 -- common/autotest_common.sh@930 -- # kill -0 3401223 00:18:13.612 00:57:55 -- common/autotest_common.sh@931 -- # uname 00:18:13.612 00:57:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.612 00:57:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3401223 00:18:13.612 00:57:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:13.612 00:57:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:13.612 00:57:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3401223' 00:18:13.612 killing process with pid 3401223 00:18:13.612 00:57:55 -- common/autotest_common.sh@945 -- # kill 3401223 00:18:13.612 Received shutdown signal, test time was about 10.000000 seconds
00:18:13.612
00:18:13.612 Latency(us)
00:18:13.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:13.612 ===================================================================================================================
00:18:13.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:13.612 00:57:55 --
common/autotest_common.sh@950 -- # wait 3401223 00:18:13.612 00:57:55 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:13.612 00:57:55 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:13.612 00:57:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.612 00:57:55 -- nvmf/common.sh@116 -- # sync 00:18:13.612 00:57:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.612 00:57:55 -- nvmf/common.sh@119 -- # set +e 00:18:13.612 00:57:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.612 00:57:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.612 rmmod nvme_tcp 00:18:13.612 rmmod nvme_fabrics 00:18:13.612 rmmod nvme_keyring 00:18:13.612 00:57:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.612 00:57:55 -- nvmf/common.sh@123 -- # set -e 00:18:13.612 00:57:55 -- nvmf/common.sh@124 -- # return 0 00:18:13.612 00:57:55 -- nvmf/common.sh@477 -- # '[' -n 3401070 ']' 00:18:13.612 00:57:55 -- nvmf/common.sh@478 -- # killprocess 3401070 00:18:13.612 00:57:55 -- common/autotest_common.sh@926 -- # '[' -z 3401070 ']' 00:18:13.612 00:57:55 -- common/autotest_common.sh@930 -- # kill -0 3401070 00:18:13.612 00:57:55 -- common/autotest_common.sh@931 -- # uname 00:18:13.612 00:57:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.612 00:57:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3401070 00:18:13.612 00:57:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:13.612 00:57:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:13.612 00:57:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3401070' 00:18:13.612 killing process with pid 3401070 00:18:13.612 00:57:55 -- common/autotest_common.sh@945 -- # kill 3401070 00:18:13.612 00:57:55 -- common/autotest_common.sh@950 -- # wait 3401070 00:18:13.612 00:57:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.612 00:57:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:13.612 00:57:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:13.612 00:57:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.612 00:57:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:13.612 00:57:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.612 00:57:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.612 00:57:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.191 00:57:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:14.191 00:18:14.191 real 0m17.178s 00:18:14.191 user 0m24.608s 00:18:14.191 sys 0m3.106s 00:18:14.191 00:57:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.191 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:18:14.191 ************************************ 00:18:14.191 END TEST nvmf_queue_depth 00:18:14.191 ************************************ 00:18:14.191 00:57:58 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:14.191 00:57:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:14.191 00:57:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:14.191 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:18:14.191 ************************************ 00:18:14.191 START TEST nvmf_multipath 00:18:14.191 ************************************ 00:18:14.191 00:57:58 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:14.191 * Looking for test storage... 00:18:14.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.191 00:57:58 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.191 00:57:58 -- nvmf/common.sh@7 -- # uname -s 00:18:14.191 00:57:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.191 00:57:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.191 00:57:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.191 00:57:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.191 00:57:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.191 00:57:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.192 00:57:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.192 00:57:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.192 00:57:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.192 00:57:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.192 00:57:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.192 00:57:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.192 00:57:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.192 00:57:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.192 00:57:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.192 00:57:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.192 00:57:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.192 00:57:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.192 00:57:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.192 00:57:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.192 00:57:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.192 00:57:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.192 00:57:58 -- paths/export.sh@5 -- # export PATH 00:18:14.192 00:57:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.192 00:57:58 -- nvmf/common.sh@46 -- # : 0 00:18:14.192 00:57:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:14.192 00:57:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:14.192 00:57:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:14.192 00:57:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.192 00:57:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.192 00:57:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:14.192 00:57:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:14.192 00:57:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:14.192 00:57:58 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.192 00:57:58 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.192 00:57:58 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:14.192 00:57:58 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.192 00:57:58 -- target/multipath.sh@43 -- # nvmftestinit 00:18:14.192 00:57:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:14.192 00:57:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.192 00:57:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:14.192 00:57:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:14.192 00:57:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:14.192 00:57:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.192 00:57:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.192 00:57:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.192 00:57:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:14.192 00:57:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:14.192 00:57:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:14.192 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:18:16.135 00:58:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:16.135 00:58:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:16.135 00:58:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:16.135 00:58:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:16.135 00:58:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:16.135 00:58:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:16.135 00:58:00 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:16.135 00:58:00 -- nvmf/common.sh@294 -- # net_devs=() 00:18:16.135 00:58:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:16.135 00:58:00 -- nvmf/common.sh@295 -- # e810=() 00:18:16.135 00:58:00 -- nvmf/common.sh@295 -- # local -ga e810 00:18:16.135 00:58:00 -- nvmf/common.sh@296 -- # x722=() 00:18:16.135 00:58:00 -- nvmf/common.sh@296 -- # local -ga x722 00:18:16.135 00:58:00 -- nvmf/common.sh@297 -- # mlx=() 00:18:16.135 00:58:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:16.135 00:58:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.135 00:58:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:16.135 00:58:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:16.135 00:58:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:16.135 00:58:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:16.135 00:58:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.135 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.135 00:58:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:16.135 00:58:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.135 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.135 00:58:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:16.135 00:58:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:16.135 00:58:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:16.135 00:58:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.135 00:58:00 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:16.136 00:58:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.136 00:58:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:16.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.136 00:58:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.136 00:58:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:16.136 00:58:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.136 00:58:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:16.136 00:58:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.136 00:58:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.136 00:58:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.136 00:58:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:16.136 00:58:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:16.136 00:58:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:16.136 00:58:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:16.136 00:58:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:16.136 00:58:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.136 00:58:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.136 00:58:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.136 00:58:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:16.136 00:58:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.136 00:58:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.136 00:58:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:16.136 00:58:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.136 00:58:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.136 00:58:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:16.136 00:58:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:16.136 00:58:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.136 00:58:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.136 00:58:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.136 00:58:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.136 00:58:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:16.136 00:58:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.394 00:58:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.394 00:58:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.394 00:58:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:16.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:18:16.394 00:18:16.394 --- 10.0.0.2 ping statistics --- 00:18:16.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.394 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:16.394 00:58:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:18:16.394 00:18:16.394 --- 10.0.0.1 ping statistics --- 00:18:16.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.394 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:16.394 00:58:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.394 00:58:00 -- nvmf/common.sh@410 -- # return 0 00:18:16.394 00:58:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.394 00:58:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.394 00:58:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.394 00:58:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.394 00:58:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.394 00:58:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.394 00:58:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.394 00:58:00 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:16.394 00:58:00 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:16.394 only one NIC for nvmf test 00:18:16.394 00:58:00 -- target/multipath.sh@47 -- # nvmftestfini 00:18:16.394 00:58:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:16.394 00:58:00 -- nvmf/common.sh@116 -- # sync 00:18:16.394 00:58:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:16.394 00:58:00 -- nvmf/common.sh@119 -- # set +e 00:18:16.394 00:58:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:16.394 00:58:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:16.394 rmmod nvme_tcp 00:18:16.394 rmmod nvme_fabrics 00:18:16.394 rmmod nvme_keyring 00:18:16.394 00:58:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:16.394 00:58:00 -- nvmf/common.sh@123 -- # set -e 00:18:16.394 00:58:00 -- nvmf/common.sh@124 -- # return 0 00:18:16.394 00:58:00 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:16.394 00:58:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:16.394 00:58:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:16.394 00:58:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:16.394 00:58:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.394 00:58:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:16.394 00:58:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.394 00:58:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.394 00:58:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.304 00:58:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.304 00:58:02 -- target/multipath.sh@48 -- # exit 0 00:18:18.304 00:58:02 -- target/multipath.sh@1 -- # nvmftestfini 00:18:18.304 00:58:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.304 00:58:02 -- nvmf/common.sh@116 -- # sync 00:18:18.304 00:58:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.304 00:58:02 -- nvmf/common.sh@119 -- # set +e 00:18:18.304 00:58:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.304 00:58:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.304 00:58:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.304 00:58:02 -- nvmf/common.sh@123 -- # set -e 00:18:18.304 00:58:02 -- nvmf/common.sh@124 -- # return 0 00:18:18.304 00:58:02 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:18.304 00:58:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:18.304 00:58:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:18.304 00:58:02 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:18.304 00:58:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.304 00:58:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:18.304 00:58:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.304 00:58:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.304 00:58:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.304 00:58:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.304 00:18:18.304 real 0m4.200s 00:18:18.304 user 0m0.798s 00:18:18.304 sys 0m1.379s 00:18:18.304 00:58:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.304 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:18:18.304 ************************************ 00:18:18.304 END TEST nvmf_multipath 00:18:18.304 ************************************ 00:18:18.563 00:58:02 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.563 00:58:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:18.563 00:58:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.563 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:18:18.563 ************************************ 00:18:18.563 START TEST nvmf_zcopy 00:18:18.563 ************************************ 00:18:18.563 00:58:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.563 * Looking for test storage... 00:18:18.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.563 00:58:02 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.563 00:58:02 -- nvmf/common.sh@7 -- # uname -s 00:18:18.563 00:58:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.563 00:58:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.563 00:58:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.563 00:58:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.563 00:58:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.563 00:58:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.563 00:58:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.563 00:58:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.563 00:58:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.563 00:58:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.563 00:58:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.563 00:58:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.563 00:58:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.563 00:58:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.563 00:58:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.563 00:58:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.563 00:58:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.563 00:58:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.563 00:58:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.563 00:58:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.563 00:58:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.563 00:58:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.563 00:58:02 -- paths/export.sh@5 -- # export PATH 00:18:18.563 00:58:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.563 00:58:02 -- nvmf/common.sh@46 -- # : 0 00:18:18.563 00:58:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.563 00:58:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.563 00:58:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.563 00:58:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.563 00:58:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.563 00:58:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.563 00:58:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.563 00:58:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.563 00:58:02 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:18.563 00:58:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.563 00:58:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.563 00:58:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.563 00:58:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.563 00:58:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:18.563 00:58:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.563 00:58:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.563 00:58:02 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.563 00:58:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:18.563 00:58:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:18.563 00:58:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:18.563 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:18:20.469 00:58:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:20.469 00:58:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:20.469 00:58:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:20.469 00:58:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:20.469 00:58:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:20.469 00:58:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:20.469 00:58:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:20.469 00:58:04 -- nvmf/common.sh@294 -- # net_devs=() 00:18:20.469 00:58:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:20.469 00:58:04 -- nvmf/common.sh@295 -- # e810=() 00:18:20.469 00:58:04 -- nvmf/common.sh@295 -- # local -ga e810 00:18:20.469 00:58:04 -- nvmf/common.sh@296 -- # x722=() 00:18:20.469 00:58:04 -- nvmf/common.sh@296 -- # local -ga x722 00:18:20.469 00:58:04 -- nvmf/common.sh@297 -- # mlx=() 00:18:20.469 00:58:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:20.469 00:58:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.469 00:58:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:20.469 00:58:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:20.469 00:58:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:20.469 00:58:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:20.469 00:58:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:20.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:20.469 00:58:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:20.469 00:58:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:20.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:20.469 
00:58:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:20.469 00:58:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:20.469 00:58:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.469 00:58:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:20.469 00:58:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.469 00:58:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:20.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:20.469 00:58:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.469 00:58:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:20.469 00:58:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.469 00:58:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:20.469 00:58:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.469 00:58:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:20.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:20.469 00:58:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.469 00:58:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:20.469 00:58:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:20.469 00:58:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:20.469 00:58:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.469 00:58:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.469 00:58:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.469 00:58:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:20.469 00:58:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.469 00:58:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.469 00:58:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:20.469 00:58:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.469 00:58:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.469 00:58:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:20.469 00:58:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:20.469 00:58:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.469 00:58:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.469 00:58:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.469 00:58:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.469 00:58:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:20.469 00:58:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.469 00:58:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.469 00:58:04 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.469 00:58:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:20.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:20.469 00:18:20.469 --- 10.0.0.2 ping statistics --- 00:18:20.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.469 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:20.469 00:58:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:18:20.469 00:18:20.469 --- 10.0.0.1 ping statistics --- 00:18:20.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.469 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:20.469 00:58:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.469 00:58:04 -- nvmf/common.sh@410 -- # return 0 00:18:20.469 00:58:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:20.469 00:58:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.469 00:58:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:20.469 00:58:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.469 00:58:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:20.469 00:58:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:20.469 00:58:04 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:20.469 00:58:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:20.469 00:58:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:20.469 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:18:20.469 00:58:04 -- nvmf/common.sh@469 -- # nvmfpid=3406466 00:18:20.469 00:58:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.469 00:58:04 -- nvmf/common.sh@470 -- # waitforlisten 3406466 00:18:20.469 00:58:04 -- common/autotest_common.sh@819 -- # '[' -z 3406466 ']' 00:18:20.469 00:58:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.469 00:58:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:20.469 00:58:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.469 00:58:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:20.469 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:18:20.727 [2024-07-23 00:58:04.697390] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
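Up to this point the trace has only been wiring up the network for the zcopy run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-checked, and nvme-tcp is loaded before nvmf_tgt is started inside the namespace. A minimal standalone sketch of that wiring, assuming the cvl_0_* interface names, the namespace name and the 10.0.0.x addresses from this run (swap in your own NICs), would look like:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns "$NS"                   # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port 4420, as in the trace
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> root namespace
modprobe nvme-tcp                                 # kernel initiator used by the connect-based tests
# the target itself is then launched inside the namespace, roughly:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Splitting the two ports across namespaces keeps target and initiator on one machine while still pushing the traffic over the physical link instead of the kernel's local loopback path.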
00:18:20.727 [2024-07-23 00:58:04.697476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.727 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.727 [2024-07-23 00:58:04.760187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.727 [2024-07-23 00:58:04.842205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:20.727 [2024-07-23 00:58:04.842358] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.727 [2024-07-23 00:58:04.842390] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.727 [2024-07-23 00:58:04.842403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.727 [2024-07-23 00:58:04.842430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.665 00:58:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:21.665 00:58:05 -- common/autotest_common.sh@852 -- # return 0 00:18:21.665 00:58:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:21.665 00:58:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 00:58:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.665 00:58:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:21.665 00:58:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:21.665 00:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 [2024-07-23 00:58:05.674832] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.665 00:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.665 00:58:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.665 00:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 00:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.665 00:58:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.665 00:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 [2024-07-23 00:58:05.691016] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.665 00:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.665 00:58:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:21.665 00:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 00:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.665 00:58:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:21.665 00:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 malloc0 00:18:21.665 00:58:05 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:21.665 00:58:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.665 00:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.665 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:18:21.665 00:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.665 00:58:05 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:21.665 00:58:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:21.665 00:58:05 -- nvmf/common.sh@520 -- # config=() 00:18:21.665 00:58:05 -- nvmf/common.sh@520 -- # local subsystem config 00:18:21.665 00:58:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:21.665 00:58:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:21.665 { 00:18:21.665 "params": { 00:18:21.665 "name": "Nvme$subsystem", 00:18:21.665 "trtype": "$TEST_TRANSPORT", 00:18:21.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.665 "adrfam": "ipv4", 00:18:21.665 "trsvcid": "$NVMF_PORT", 00:18:21.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.665 "hdgst": ${hdgst:-false}, 00:18:21.665 "ddgst": ${ddgst:-false} 00:18:21.665 }, 00:18:21.665 "method": "bdev_nvme_attach_controller" 00:18:21.665 } 00:18:21.665 EOF 00:18:21.665 )") 00:18:21.665 00:58:05 -- nvmf/common.sh@542 -- # cat 00:18:21.665 00:58:05 -- nvmf/common.sh@544 -- # jq . 00:18:21.665 00:58:05 -- nvmf/common.sh@545 -- # IFS=, 00:18:21.665 00:58:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:21.665 "params": { 00:18:21.665 "name": "Nvme1", 00:18:21.665 "trtype": "tcp", 00:18:21.665 "traddr": "10.0.0.2", 00:18:21.665 "adrfam": "ipv4", 00:18:21.665 "trsvcid": "4420", 00:18:21.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.665 "hdgst": false, 00:18:21.665 "ddgst": false 00:18:21.665 }, 00:18:21.665 "method": "bdev_nvme_attach_controller" 00:18:21.665 }' 00:18:21.665 [2024-07-23 00:58:05.769416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:21.665 [2024-07-23 00:58:05.769505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406622 ] 00:18:21.665 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.665 [2024-07-23 00:58:05.836920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.923 [2024-07-23 00:58:05.928765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.181 Running I/O for 10 seconds... 
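The control-plane half of the test is the short rpc_cmd sequence traced above: create the TCP transport with zero-copy enabled, create the cnode1 subsystem, listen on 10.0.0.2:4420 (plus the discovery service), create a 32 MB malloc bdev with 4096-byte blocks and expose it as namespace 1, and only then start bdevperf against it. As a sketch, the same sequence issued by hand with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (the relative path is an assumption; the arguments are the ones in the trace) would be:

./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport, zero copy enabled
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MB backing bdev, 4096-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # attach malloc0 as NSID 1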
00:18:32.166 00:18:32.166 Latency(us) 00:18:32.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.166 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:32.166 Verification LBA range: start 0x0 length 0x1000 00:18:32.166 Nvme1n1 : 10.01 8635.24 67.46 0.00 0.00 14787.79 1650.54 21262.79 00:18:32.166 =================================================================================================================== 00:18:32.166 Total : 8635.24 67.46 0.00 0.00 14787.79 1650.54 21262.79 00:18:32.425 00:58:16 -- target/zcopy.sh@39 -- # perfpid=3407853 00:18:32.425 00:58:16 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:32.425 00:58:16 -- common/autotest_common.sh@10 -- # set +x 00:18:32.425 00:58:16 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:32.425 00:58:16 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:32.425 00:58:16 -- nvmf/common.sh@520 -- # config=() 00:18:32.425 00:58:16 -- nvmf/common.sh@520 -- # local subsystem config 00:18:32.425 00:58:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:32.425 00:58:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:32.425 { 00:18:32.425 "params": { 00:18:32.425 "name": "Nvme$subsystem", 00:18:32.425 "trtype": "$TEST_TRANSPORT", 00:18:32.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:32.425 "adrfam": "ipv4", 00:18:32.425 "trsvcid": "$NVMF_PORT", 00:18:32.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:32.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:32.425 "hdgst": ${hdgst:-false}, 00:18:32.425 "ddgst": ${ddgst:-false} 00:18:32.425 }, 00:18:32.425 "method": "bdev_nvme_attach_controller" 00:18:32.425 } 00:18:32.425 EOF 00:18:32.425 )") 00:18:32.425 00:58:16 -- nvmf/common.sh@542 -- # cat 00:18:32.425 [2024-07-23 00:58:16.553480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.553527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 00:58:16 -- nvmf/common.sh@544 -- # jq . 
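Both bdevperf invocations get their bdev configuration the same way: gen_nvmf_target_json emits a bdev_nvme_attach_controller block (the resolved values are printed in the trace: Nvme1 over tcp to 10.0.0.2:4420, cnode1/host1, digests off) and bdevperf reads it from an anonymous descriptor, which is presumably where the --json /dev/fd/62 and /dev/fd/63 paths come from. A hand-rolled sketch of that pattern follows; the inner "params" object is copied from this log, but the outer "subsystems"/"bdev"/"config" wrapper and the helper name are assumptions about the usual SPDK JSON-config layout, not a copy of gen_nvmf_target_json itself.

gen_target_json() {
    # emit a minimal SPDK JSON config describing one NVMe-oF/TCP controller
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# process substitution hands bdevperf the generated config as /dev/fd/NN; nothing is written to disk
./build/examples/bdevperf --json <(gen_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192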
00:18:32.425 00:58:16 -- nvmf/common.sh@545 -- # IFS=, 00:18:32.425 00:58:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:32.425 "params": { 00:18:32.425 "name": "Nvme1", 00:18:32.425 "trtype": "tcp", 00:18:32.425 "traddr": "10.0.0.2", 00:18:32.425 "adrfam": "ipv4", 00:18:32.425 "trsvcid": "4420", 00:18:32.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.425 "hdgst": false, 00:18:32.425 "ddgst": false 00:18:32.425 }, 00:18:32.425 "method": "bdev_nvme_attach_controller" 00:18:32.425 }' 00:18:32.425 [2024-07-23 00:58:16.561450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.561476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.569470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.569495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.577492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.577517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.585520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.585547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.589580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:32.425 [2024-07-23 00:58:16.589675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407853 ] 00:18:32.425 [2024-07-23 00:58:16.593538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.593563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.601560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.601584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.609581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.609605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.425 [2024-07-23 00:58:16.617602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.617634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.425 [2024-07-23 00:58:16.625632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.425 [2024-07-23 00:58:16.625655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.633653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.633680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.641675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.641699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.649696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.649719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.651886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.684 [2024-07-23 00:58:16.657741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.657771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.665763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.665800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.673760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.673793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.681781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.681805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.689804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.689827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.697827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.697850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.705869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.705903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.713883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.713913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.721891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.721914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.729914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.729938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.737935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.737958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.744708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.684 [2024-07-23 00:58:16.745955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.745979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.753977] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.754000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.762025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.762060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.770048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.770084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.778073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.778110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.786091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.786129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.794116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.794154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.802136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.802174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.810160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.810197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.818163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.818196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.826210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.826247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.834250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.834288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.842235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.842260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.850253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.850286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.858289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.858318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.866320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.866347] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.874341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.874368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.684 [2024-07-23 00:58:16.882359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.684 [2024-07-23 00:58:16.882387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.890382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.890407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.898403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.898427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.906428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.906452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.914448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.914473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.922552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.922579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.930560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.930588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.938580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.938605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.946605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.946636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.954628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.954649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.962678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.962700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.970690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.970719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.978697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.978722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.986743] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.986766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:16.994748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:16.994769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.002770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.002792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.010796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.010819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.018820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.018844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.026841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.026863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.034861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.034883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.042883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.042918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.050922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.050943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.058935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.058973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.066980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.067000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.075009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.075037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.083021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.083042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 Running I/O for 5 seconds... 
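The wall of paired messages above (subsystem.c: "Requested NSID 1 already in use" followed by nvmf_rpc.c: "Unable to add namespace") is the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1 while the 5-second randrw bdevperf job runs against the namespace that already occupies it; the nvmf_rpc_ns_paused name in the second message suggests each attempt pauses and resumes the subsystem around the add, which appears to be the point of hammering it while zero-copy I/O is in flight. A minimal way to reproduce that add-while-busy pattern by hand (the loop, its 5-second window and the error handling are illustrative assumptions; only the RPC and its arguments come from this log) would be:

# keep retrying an add that the target is expected to reject while I/O runs elsewhere
end=$((SECONDS + 5))
while (( SECONDS < end )); do
    # NSID 1 is already attached to malloc0, so each call fails with the errors seen above
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done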
00:18:32.945 [2024-07-23 00:58:17.091036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.091056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.945 [2024-07-23 00:58:17.104427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.945 [2024-07-23 00:58:17.104456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.946 [2024-07-23 00:58:17.116353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.946 [2024-07-23 00:58:17.116381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.946 [2024-07-23 00:58:17.125356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.946 [2024-07-23 00:58:17.125384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.946 [2024-07-23 00:58:17.137284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.946 [2024-07-23 00:58:17.137318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.148708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.148737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.157508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.157536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.168360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.168387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.180768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.180797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.189598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.189632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.201666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.201694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.211058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.211085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.220733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.220760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.230545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.230573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.240786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 
[2024-07-23 00:58:17.240814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.252722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.252750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.261558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.261586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.272137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.272165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.281950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.281977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.291772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.291799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.301831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.301869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.311924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.311952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.322132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.322159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.332340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.332367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.344338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.344367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.353405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.353432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.205 [2024-07-23 00:58:17.364038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.205 [2024-07-23 00:58:17.364065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.206 [2024-07-23 00:58:17.375729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.206 [2024-07-23 00:58:17.375756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.206 [2024-07-23 00:58:17.384857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.206 [2024-07-23 00:58:17.384884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.206 [2024-07-23 00:58:17.395677] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.206 [2024-07-23 00:58:17.395704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.407709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.407737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.417085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.417113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.429502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.429530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.441199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.441226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.449589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.449623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.460385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.460413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.470414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.470441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.481312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.481342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.491927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.491953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.502148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.502178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.512762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.512790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.523644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.523671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.534313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.534341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.545229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.545259] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.556318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.556346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.566966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.465 [2024-07-23 00:58:17.566993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.465 [2024-07-23 00:58:17.577868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.577896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.588367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.588398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.598958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.598989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.609569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.609596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.622670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.622698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.632310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.632341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.643642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.643673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.654343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.654373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.466 [2024-07-23 00:58:17.664991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.466 [2024-07-23 00:58:17.665019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.677742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.677770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.687409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.687439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.698182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.698212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.710988] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.711015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.720852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.720880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.732507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.732537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.743244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.743274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.754184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.754214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.764825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.764852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.775694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.775721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.786533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.786560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.797255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.797282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.809802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.809829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.819116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.819146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.830751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.830777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.841463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.841493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.852348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.852378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.863098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.863129] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.873985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.874015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.885180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.885209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.896479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.896509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.907258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.907288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.726 [2024-07-23 00:58:17.918390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.726 [2024-07-23 00:58:17.918420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.929181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.929212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.940285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.940316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.951425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.951456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.962407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.962438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.973304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.973336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.984338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.984369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:17.994925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:17.994953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.005897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.005924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.016837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.016875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.027807] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.027834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.038660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.038690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.049372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.049403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.061969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.061997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.071490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.071530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.082728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.082755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.093273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.093305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.104245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.104275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.115187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.115217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.125871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.125909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.136808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.136835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.147789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.147824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.158770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.158797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.169612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.169649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.987 [2024-07-23 00:58:18.180722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.987 [2024-07-23 00:58:18.180749] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.193811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.193839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.203377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.203408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.214672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.214700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.225638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.225665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.236626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.236653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.247496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.247526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.258039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.258066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.271127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.271158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.281690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.281718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.292142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.292173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.302730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.302757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.313865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.313892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.326848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.326875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.336689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.336719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.347971] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.347999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.360553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.360595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.370305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.370335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.382011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.382039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.394681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.394710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.404102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.404133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.415312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.415342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.426019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.426064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.437083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.437113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.246 [2024-07-23 00:58:18.447841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.246 [2024-07-23 00:58:18.447868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.458489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.458519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.469326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.469357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.480001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.480027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.491018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.491050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.501889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.501917] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.512888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.512920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.523786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.523816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.534410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.534440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.547650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.547677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.556730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.556757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.568291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.568326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.578885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.578912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.589520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.589546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.600094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.600124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.610973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.611003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.621737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.621764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.632758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.632785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.645330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.645360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.655331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.655361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.666170] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.666200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.678393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.678423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.504 [2024-07-23 00:58:18.687489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.504 [2024-07-23 00:58:18.687519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.505 [2024-07-23 00:58:18.699366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.505 [2024-07-23 00:58:18.699397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.712349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.712380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.722878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.722904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.733534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.733562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.746538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.746569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.757235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.757265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.768117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.768147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.779046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.779084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.790110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.790140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.801127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.801157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.811358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.811384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.822528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.822558] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.833089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.833119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.845874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.845901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.856133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.856163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.867673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.867701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.878788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.878815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.889677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.889704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.900629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.900657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.911769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.911796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.922568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.922598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.933033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.933063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.943340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.943370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.763 [2024-07-23 00:58:18.954254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.763 [2024-07-23 00:58:18.954285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.021 [2024-07-23 00:58:18.965564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.021 [2024-07-23 00:58:18.965595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.021 [2024-07-23 00:58:18.976523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:18.976553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:18.987317] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:18.987347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:18.998226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:18.998256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.009035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.009065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.019762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.019790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.032603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.032645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.041563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.041593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.054817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.054844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.065489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.065519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.076562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.076589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.088827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.088855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.098666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.098694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.110020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.110051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.121169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.121200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.132300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.132331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.143115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.143145] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.154390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.154421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.165389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.165421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.178654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.178682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.188589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.188626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.199806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.199837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.209690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.209717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.022 [2024-07-23 00:58:19.221038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.022 [2024-07-23 00:58:19.221069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.231773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.231803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.242326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.242356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.252266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.252297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.263994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.264038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.274719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.274746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.285784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.285811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.296404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.296435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.307356] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.307386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.318501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.318531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.329642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.329668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.340581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.340609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.351821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.351849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.362765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.362793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.373532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.373559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.384785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.384812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.395766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.395794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.406499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.406530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.419007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.419034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.428985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.429016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.440416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.440446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.451081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.451107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.461819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.461846] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.472506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.472536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.282 [2024-07-23 00:58:19.483094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.282 [2024-07-23 00:58:19.483125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.493401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.542 [2024-07-23 00:58:19.493432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.504103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.542 [2024-07-23 00:58:19.504133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.517263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.542 [2024-07-23 00:58:19.517294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.527411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.542 [2024-07-23 00:58:19.527441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.538554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.542 [2024-07-23 00:58:19.538582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.549201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.542 [2024-07-23 00:58:19.549232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.542 [2024-07-23 00:58:19.559805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.559832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.570707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.570745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.581631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.581658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.592258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.592289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.603310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.603341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.614025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.614052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.625060] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.625087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.635829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.635856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.647154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.647184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.658503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.658533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.669559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.669587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.682238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.682268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.692168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.692198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.703529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.703559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.714365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.714395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.725276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.725306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.543 [2024-07-23 00:58:19.737894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.543 [2024-07-23 00:58:19.737922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.801 [2024-07-23 00:58:19.747487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.801 [2024-07-23 00:58:19.747518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.801 [2024-07-23 00:58:19.759009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.801 [2024-07-23 00:58:19.759040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.801 [2024-07-23 00:58:19.769714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.769741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.780264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.780294] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.791014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.791041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.802170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.802200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.813380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.813418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.824354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.824384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.835088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.835133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.845884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.845911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.858897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.858925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.869250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.869280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.879832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.879859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.890907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.890935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.901764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.901792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.912777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.912804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.923507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.923537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.936180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.936210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.946032] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.946062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.957651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.957679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.968783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.968812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.979642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.979669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:19.990557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:19.990585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.802 [2024-07-23 00:58:20.001370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.802 [2024-07-23 00:58:20.001406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.012136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.012167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.025313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.025356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.034627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.034655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.046262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.046292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.058899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.058927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.067861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.067888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.079974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.080002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.090172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.090201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.100712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.100739] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.113173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.113202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.122588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.122626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.133943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.133970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.146165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.146192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.155609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.155645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.167201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.167230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.179729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.179756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.189458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.189487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.200200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.200228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.210254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.210283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.220645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.220672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.231493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.231531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.241593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.241629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.060 [2024-07-23 00:58:20.252373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.060 [2024-07-23 00:58:20.252403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.319 [2024-07-23 00:58:20.265563] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.319 [2024-07-23 00:58:20.265592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-entry error pair -- subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats roughly every 10 ms from 00:58:20.274 through 00:58:22.021 while the zcopy job runs; the duplicate entries are elided here]
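The condensed error pair above is what the target prints when an add-namespace RPC asks for an NSID the subsystem already exposes. A minimal sketch of the same failure, assuming a running target that already has NSID 1 attached to nqn.2016-06.io.spdk:cnode1 and a bdev named malloc0 (both set up earlier in this test), driven through scripts/rpc.py, which the rpc_cmd helper in these scripts wraps; this is an illustration, not the test's exact loop:

  # first attach claims NSID 1 on the subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # repeating the call is rejected: subsystem.c reports "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1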
[the error pair continues through 00:58:22.107; duplicates elided]
00:18:38.168
00:18:38.168 Latency(us)
00:18:38.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:38.168 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:38.168 Nvme1n1 : 5.01 11792.10 92.13 0.00 0.00 10840.88 4271.98 22816.24
00:18:38.168 ===================================================================================================================
00:18:38.168 Total : 11792.10 92.13 0.00 0.00 10840.88 4271.98 22816.24
00:18:38.168
[after the summary the same error pair continues from 00:58:22.113 through 00:58:22.258; duplicates elided]
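As a quick sanity check on the run summary above (a sketch, not part of the captured output): the MiB/s column follows from the IOPS column and the 8192-byte IO size, and IOPS times average latency recovers the queue depth.

  awk 'BEGIN { printf "%.2f MiB/s\n", 11792.10 * 8192 / (1024 * 1024) }'   # 92.13, matching the Nvme1n1 row
  awk 'BEGIN { printf "%.1f in flight\n", 11792.10 * 10840.88 / 1000000 }' # ~127.8, consistent with the depth of 128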
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.168 [2024-07-23 00:58:22.266212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.168 [2024-07-23 00:58:22.266237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.274266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.274311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.282296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.282345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.290314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.290359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.298305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.298331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.306337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.306368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.314394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.314442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.322412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.322460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.330396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.330422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.338416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.338440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 [2024-07-23 00:58:22.346439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.169 [2024-07-23 00:58:22.346465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3407853) - No such process 00:18:38.169 00:58:22 -- target/zcopy.sh@49 -- # wait 3407853 00:18:38.169 00:58:22 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:38.169 00:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.169 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:18:38.169 00:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.169 00:58:22 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:38.169 00:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.169 00:58:22 -- common/autotest_common.sh@10 -- # set +x 
00:18:38.169 delay0 00:18:38.169 00:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.169 00:58:22 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:38.169 00:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.169 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:18:38.427 00:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.427 00:58:22 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:38.427 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.427 [2024-07-23 00:58:22.507762] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:44.996 Initializing NVMe Controllers 00:18:44.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:44.996 Initialization complete. Launching workers. 00:18:44.996 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 747 00:18:44.996 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1034, failed to submit 33 00:18:44.996 success 833, unsuccess 201, failed 0 00:18:44.996 00:58:28 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:44.996 00:58:28 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:44.996 00:58:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:44.996 00:58:28 -- nvmf/common.sh@116 -- # sync 00:18:44.996 00:58:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:44.996 00:58:28 -- nvmf/common.sh@119 -- # set +e 00:18:44.996 00:58:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:44.996 00:58:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:44.996 rmmod nvme_tcp 00:18:44.996 rmmod nvme_fabrics 00:18:44.996 rmmod nvme_keyring 00:18:44.996 00:58:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:44.996 00:58:28 -- nvmf/common.sh@123 -- # set -e 00:18:44.996 00:58:28 -- nvmf/common.sh@124 -- # return 0 00:18:44.996 00:58:28 -- nvmf/common.sh@477 -- # '[' -n 3406466 ']' 00:18:44.996 00:58:28 -- nvmf/common.sh@478 -- # killprocess 3406466 00:18:44.996 00:58:28 -- common/autotest_common.sh@926 -- # '[' -z 3406466 ']' 00:18:44.996 00:58:28 -- common/autotest_common.sh@930 -- # kill -0 3406466 00:18:44.996 00:58:28 -- common/autotest_common.sh@931 -- # uname 00:18:44.996 00:58:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.996 00:58:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3406466 00:18:44.996 00:58:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:44.996 00:58:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:44.996 00:58:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3406466' 00:18:44.996 killing process with pid 3406466 00:18:44.996 00:58:28 -- common/autotest_common.sh@945 -- # kill 3406466 00:18:44.996 00:58:28 -- common/autotest_common.sh@950 -- # wait 3406466 00:18:44.996 00:58:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:44.996 00:58:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:44.996 00:58:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:44.996 00:58:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
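The three commands traced above are the heart of this scenario: wrap the malloc bdev in a delay bdev so each I/O stays outstanding for about a second, expose it as NSID 1, then run the abort example against it over TCP. A condensed sketch, run from the SPDK repo root with the same flags the trace shows (-r/-t/-w/-n set the delay bdev's average and p99 read/write latencies in microseconds, so 1000000 is roughly one second per I/O):

  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort counters reported above are self-consistent: 833 successful plus 201 unsuccessful aborts account for the 1034 submitted, and 1034 plus the 33 that failed to submit matches the 320 completed plus 747 failed I/Os.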
00:18:44.996 00:58:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:44.996 00:58:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.996 00:58:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.996 00:58:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.528 00:58:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:47.528 00:18:47.528 real 0m28.649s 00:18:47.528 user 0m42.432s 00:18:47.528 sys 0m8.252s 00:18:47.528 00:58:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.528 00:58:31 -- common/autotest_common.sh@10 -- # set +x 00:18:47.528 ************************************ 00:18:47.528 END TEST nvmf_zcopy 00:18:47.528 ************************************ 00:18:47.528 00:58:31 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:47.528 00:58:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:47.528 00:58:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.528 00:58:31 -- common/autotest_common.sh@10 -- # set +x 00:18:47.528 ************************************ 00:18:47.528 START TEST nvmf_nmic 00:18:47.528 ************************************ 00:18:47.528 00:58:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:47.528 * Looking for test storage... 00:18:47.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.528 00:58:31 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.528 00:58:31 -- nvmf/common.sh@7 -- # uname -s 00:18:47.528 00:58:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.528 00:58:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.528 00:58:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.528 00:58:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.528 00:58:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.528 00:58:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.528 00:58:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.528 00:58:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.529 00:58:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.529 00:58:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.529 00:58:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.529 00:58:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.529 00:58:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.529 00:58:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.529 00:58:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.529 00:58:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.529 00:58:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.529 00:58:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.529 00:58:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.529 00:58:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.529 00:58:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.529 00:58:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.529 00:58:31 -- paths/export.sh@5 -- # export PATH 00:18:47.529 00:58:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.529 00:58:31 -- nvmf/common.sh@46 -- # : 0 00:18:47.529 00:58:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:47.529 00:58:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:47.529 00:58:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:47.529 00:58:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.529 00:58:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.529 00:58:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:47.529 00:58:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:47.529 00:58:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:47.529 00:58:31 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.529 00:58:31 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.529 00:58:31 -- target/nmic.sh@14 -- # nvmftestinit 00:18:47.529 00:58:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:47.529 00:58:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.529 00:58:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:47.529 00:58:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:47.529 00:58:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:47.529 00:58:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:47.529 00:58:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.529 00:58:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.529 00:58:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:47.529 00:58:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:47.529 00:58:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:47.529 00:58:31 -- common/autotest_common.sh@10 -- # set +x 00:18:49.435 00:58:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:49.435 00:58:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:49.435 00:58:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:49.435 00:58:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:49.435 00:58:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:49.435 00:58:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:49.435 00:58:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:49.435 00:58:33 -- nvmf/common.sh@294 -- # net_devs=() 00:18:49.435 00:58:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:49.435 00:58:33 -- nvmf/common.sh@295 -- # e810=() 00:18:49.435 00:58:33 -- nvmf/common.sh@295 -- # local -ga e810 00:18:49.435 00:58:33 -- nvmf/common.sh@296 -- # x722=() 00:18:49.435 00:58:33 -- nvmf/common.sh@296 -- # local -ga x722 00:18:49.435 00:58:33 -- nvmf/common.sh@297 -- # mlx=() 00:18:49.435 00:58:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:49.435 00:58:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.435 00:58:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:49.435 00:58:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:49.435 00:58:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:49.435 00:58:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:49.435 00:58:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:49.435 00:58:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:49.435 00:58:33 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:49.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:49.435 00:58:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:49.435 00:58:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:49.435 00:58:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.435 00:58:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:49.435 00:58:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.435 00:58:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:49.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:49.435 00:58:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.435 00:58:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:49.435 00:58:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.435 00:58:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:49.435 00:58:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.435 00:58:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:49.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:49.435 00:58:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.435 00:58:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:49.435 00:58:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:49.435 00:58:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:49.435 00:58:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.435 00:58:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.435 00:58:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.435 00:58:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:49.435 00:58:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.435 00:58:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.435 00:58:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:49.435 00:58:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.435 00:58:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.435 00:58:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:49.435 00:58:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:49.435 00:58:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.435 00:58:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.435 00:58:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.435 00:58:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.435 00:58:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:49.435 00:58:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:18:49.435 00:58:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.435 00:58:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.435 00:58:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:49.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:18:49.435 00:18:49.435 --- 10.0.0.2 ping statistics --- 00:18:49.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.435 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:49.435 00:58:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:49.435 00:18:49.435 --- 10.0.0.1 ping statistics --- 00:18:49.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.435 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:49.435 00:58:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.435 00:58:33 -- nvmf/common.sh@410 -- # return 0 00:18:49.435 00:58:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:49.435 00:58:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.435 00:58:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:49.435 00:58:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.435 00:58:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:49.435 00:58:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:49.435 00:58:33 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:49.435 00:58:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:49.435 00:58:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:49.435 00:58:33 -- common/autotest_common.sh@10 -- # set +x 00:18:49.435 00:58:33 -- nvmf/common.sh@469 -- # nvmfpid=3411284 00:18:49.435 00:58:33 -- nvmf/common.sh@470 -- # waitforlisten 3411284 00:18:49.435 00:58:33 -- common/autotest_common.sh@819 -- # '[' -z 3411284 ']' 00:18:49.435 00:58:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.435 00:58:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.435 00:58:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:49.435 00:58:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.435 00:58:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:49.435 00:58:33 -- common/autotest_common.sh@10 -- # set +x 00:18:49.435 [2024-07-23 00:58:33.412915] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
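For anyone replaying this log by hand: the nvmf_tcp_init sequence above builds the topology the rest of the run depends on, pushing one E810 port into a private network namespace for the target while the initiator keeps the other port in the default namespace, so NVMe/TCP traffic really crosses the NIC. A condensed, hedged sketch of the equivalent commands (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from this run, not fixed constants, and this is illustrative rather than the canonical common.sh implementation):

# target side lives in its own namespace, initiator stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # move the target port
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP (port 4420) through the host firewall
ping -c 1 10.0.0.2                                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator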
00:18:49.435 [2024-07-23 00:58:33.412988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.435 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.435 [2024-07-23 00:58:33.482803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.435 [2024-07-23 00:58:33.576621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:49.435 [2024-07-23 00:58:33.576790] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.435 [2024-07-23 00:58:33.576810] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.435 [2024-07-23 00:58:33.576824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.435 [2024-07-23 00:58:33.576931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.435 [2024-07-23 00:58:33.576976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.435 [2024-07-23 00:58:33.577027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.435 [2024-07-23 00:58:33.577030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.370 00:58:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:50.370 00:58:34 -- common/autotest_common.sh@852 -- # return 0 00:18:50.370 00:58:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:50.370 00:58:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 00:58:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.370 00:58:34 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 [2024-07-23 00:58:34.357130] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 Malloc0 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.370 [2024-07-23 00:58:34.408103] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:50.370 test case1: single bdev can't be used in multiple subsystems 00:18:50.370 00:58:34 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@28 -- # nmic_status=0 00:18:50.370 00:58:34 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 [2024-07-23 00:58:34.431997] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:50.370 [2024-07-23 00:58:34.432025] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:50.370 [2024-07-23 00:58:34.432048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.370 request: 00:18:50.370 { 00:18:50.370 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:50.370 "namespace": { 00:18:50.370 "bdev_name": "Malloc0" 00:18:50.370 }, 00:18:50.370 "method": "nvmf_subsystem_add_ns", 00:18:50.370 "req_id": 1 00:18:50.370 } 00:18:50.370 Got JSON-RPC error response 00:18:50.370 response: 00:18:50.370 { 00:18:50.370 "code": -32602, 00:18:50.370 "message": "Invalid parameters" 00:18:50.370 } 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@29 -- # nmic_status=1 00:18:50.370 00:58:34 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:50.370 00:58:34 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:50.370 Adding namespace failed - expected result. 
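Behind the rpc_cmd trace above, test case1 is just a handful of JSON-RPC calls against the target's /var/tmp/spdk.sock. A hedged reconstruction using scripts/rpc.py from this workspace; the last call is the one expected to fail, because Malloc0 is already claimed (exclusive_write) by cnode1, which is exactly the -32602 "Invalid parameters" response captured above:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: bdev already claimed by cnode1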
00:18:50.370 00:58:34 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:50.370 test case2: host connect to nvmf target in multiple paths 00:18:50.370 00:58:34 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:50.370 00:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.370 00:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:50.370 [2024-07-23 00:58:34.440104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:50.370 00:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.370 00:58:34 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.938 00:58:35 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:51.535 00:58:35 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.535 00:58:35 -- common/autotest_common.sh@1177 -- # local i=0 00:18:51.535 00:58:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.535 00:58:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:51.535 00:58:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:54.067 00:58:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:54.067 00:58:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:54.067 00:58:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.067 00:58:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:54.067 00:58:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.067 00:58:37 -- common/autotest_common.sh@1187 -- # return 0 00:18:54.067 00:58:37 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:54.067 [global] 00:18:54.067 thread=1 00:18:54.067 invalidate=1 00:18:54.067 rw=write 00:18:54.067 time_based=1 00:18:54.067 runtime=1 00:18:54.067 ioengine=libaio 00:18:54.067 direct=1 00:18:54.067 bs=4096 00:18:54.067 iodepth=1 00:18:54.067 norandommap=0 00:18:54.067 numjobs=1 00:18:54.067 00:18:54.067 verify_dump=1 00:18:54.067 verify_backlog=512 00:18:54.067 verify_state_save=0 00:18:54.067 do_verify=1 00:18:54.068 verify=crc32c-intel 00:18:54.068 [job0] 00:18:54.068 filename=/dev/nvme0n1 00:18:54.068 Could not set queue depth (nvme0n1) 00:18:54.068 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.068 fio-3.35 00:18:54.068 Starting 1 thread 00:18:55.003 00:18:55.003 job0: (groupid=0, jobs=1): err= 0: pid=3411945: Tue Jul 23 00:58:39 2024 00:18:55.003 read: IOPS=515, BW=2063KiB/s (2113kB/s)(2092KiB/1014msec) 00:18:55.003 slat (nsec): min=7665, max=57610, avg=19394.30, stdev=6880.88 00:18:55.003 clat (usec): min=291, max=42056, avg=1248.60, stdev=5968.27 00:18:55.003 lat (usec): min=309, max=42074, avg=1267.99, stdev=5968.16 00:18:55.003 clat percentiles (usec): 00:18:55.003 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 343], 00:18:55.003 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 383], 00:18:55.003 | 70.00th=[ 392], 
80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 498], 00:18:55.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:55.003 | 99.99th=[42206] 00:18:55.003 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:18:55.003 slat (nsec): min=7693, max=76258, avg=19177.58, stdev=10529.57 00:18:55.003 clat (usec): min=176, max=3298, avg=314.02, stdev=123.12 00:18:55.003 lat (usec): min=184, max=3317, avg=333.20, stdev=126.57 00:18:55.003 clat percentiles (usec): 00:18:55.003 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 206], 00:18:55.003 | 30.00th=[ 273], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 347], 00:18:55.003 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 408], 95.00th=[ 424], 00:18:55.003 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 1004], 99.95th=[ 3294], 00:18:55.003 | 99.99th=[ 3294] 00:18:55.003 bw ( KiB/s): min= 1568, max= 6624, per=100.00%, avg=4096.00, stdev=3575.13, samples=2 00:18:55.003 iops : min= 392, max= 1656, avg=1024.00, stdev=893.78, samples=2 00:18:55.003 lat (usec) : 250=17.13%, 500=81.06%, 750=0.90% 00:18:55.003 lat (msec) : 2=0.13%, 4=0.06%, 50=0.71% 00:18:55.003 cpu : usr=2.07%, sys=3.65%, ctx=1547, majf=0, minf=2 00:18:55.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.004 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.004 00:18:55.004 Run status group 0 (all jobs): 00:18:55.004 READ: bw=2063KiB/s (2113kB/s), 2063KiB/s-2063KiB/s (2113kB/s-2113kB/s), io=2092KiB (2142kB), run=1014-1014msec 00:18:55.004 WRITE: bw=4039KiB/s (4136kB/s), 4039KiB/s-4039KiB/s (4136kB/s-4136kB/s), io=4096KiB (4194kB), run=1014-1014msec 00:18:55.004 00:18:55.004 Disk stats (read/write): 00:18:55.004 nvme0n1: ios=570/1024, merge=0/0, ticks=551/301, in_queue=852, util=91.98% 00:18:55.004 00:58:39 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:55.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:55.004 00:58:39 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:55.004 00:58:39 -- common/autotest_common.sh@1198 -- # local i=0 00:18:55.004 00:58:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:55.004 00:58:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:55.004 00:58:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:55.004 00:58:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:55.004 00:58:39 -- common/autotest_common.sh@1210 -- # return 0 00:18:55.004 00:58:39 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:55.004 00:58:39 -- target/nmic.sh@53 -- # nvmftestfini 00:18:55.004 00:58:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:55.004 00:58:39 -- nvmf/common.sh@116 -- # sync 00:18:55.004 00:58:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:55.004 00:58:39 -- nvmf/common.sh@119 -- # set +e 00:18:55.004 00:58:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:55.004 00:58:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:55.004 rmmod nvme_tcp 00:18:55.004 rmmod nvme_fabrics 00:18:55.004 rmmod nvme_keyring 00:18:55.004 00:58:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:55.004 00:58:39 -- nvmf/common.sh@123 -- # set -e 
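Test case2 above drives the host side: the same subsystem is reached over two listeners (ports 4420 and 4421), the wrapper waits until a block device with the expected serial appears, fio writes through it, and a single disconnect by NQN drops both controllers ("disconnected 2 controller(s)"). A minimal sketch of that flow, with the host NQN/ID and addresses copied from this run and a simplified wait loop standing in for the waitforserial helper:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# wait until a namespace with the expected serial shows up (simplified stand-in for waitforserial)
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
# ... run fio against /dev/nvme0n1 (see the job file printed above) ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # tears down both paths/controllers at once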
00:18:55.004 00:58:39 -- nvmf/common.sh@124 -- # return 0 00:18:55.004 00:58:39 -- nvmf/common.sh@477 -- # '[' -n 3411284 ']' 00:18:55.004 00:58:39 -- nvmf/common.sh@478 -- # killprocess 3411284 00:18:55.004 00:58:39 -- common/autotest_common.sh@926 -- # '[' -z 3411284 ']' 00:18:55.004 00:58:39 -- common/autotest_common.sh@930 -- # kill -0 3411284 00:18:55.004 00:58:39 -- common/autotest_common.sh@931 -- # uname 00:18:55.004 00:58:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:55.004 00:58:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3411284 00:18:55.263 00:58:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:55.263 00:58:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:55.263 00:58:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3411284' 00:18:55.263 killing process with pid 3411284 00:18:55.263 00:58:39 -- common/autotest_common.sh@945 -- # kill 3411284 00:18:55.263 00:58:39 -- common/autotest_common.sh@950 -- # wait 3411284 00:18:55.523 00:58:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:55.523 00:58:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:55.523 00:58:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:55.523 00:58:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.523 00:58:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:55.523 00:58:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.523 00:58:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.523 00:58:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.428 00:58:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:57.429 00:18:57.429 real 0m10.337s 00:18:57.429 user 0m24.143s 00:18:57.429 sys 0m2.512s 00:18:57.429 00:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.429 00:58:41 -- common/autotest_common.sh@10 -- # set +x 00:18:57.429 ************************************ 00:18:57.429 END TEST nvmf_nmic 00:18:57.429 ************************************ 00:18:57.429 00:58:41 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:57.429 00:58:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:57.429 00:58:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:57.429 00:58:41 -- common/autotest_common.sh@10 -- # set +x 00:18:57.429 ************************************ 00:18:57.429 START TEST nvmf_fio_target 00:18:57.429 ************************************ 00:18:57.429 00:58:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:57.429 * Looking for test storage... 
00:18:57.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.429 00:58:41 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.429 00:58:41 -- nvmf/common.sh@7 -- # uname -s 00:18:57.429 00:58:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.429 00:58:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.429 00:58:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.429 00:58:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.429 00:58:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.429 00:58:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.429 00:58:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.429 00:58:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.429 00:58:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.429 00:58:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.429 00:58:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.429 00:58:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.429 00:58:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.429 00:58:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.429 00:58:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.429 00:58:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.429 00:58:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.429 00:58:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.429 00:58:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.429 00:58:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.429 00:58:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.429 00:58:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.429 00:58:41 -- paths/export.sh@5 -- # export PATH 00:18:57.429 00:58:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.429 00:58:41 -- nvmf/common.sh@46 -- # : 0 00:18:57.429 00:58:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:57.429 00:58:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:57.429 00:58:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:57.429 00:58:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.429 00:58:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.429 00:58:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:57.429 00:58:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:57.429 00:58:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:57.429 00:58:41 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.429 00:58:41 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.429 00:58:41 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.429 00:58:41 -- target/fio.sh@16 -- # nvmftestinit 00:18:57.429 00:58:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:57.429 00:58:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.429 00:58:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:57.429 00:58:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:57.429 00:58:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:57.429 00:58:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.429 00:58:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.429 00:58:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.688 00:58:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:57.688 00:58:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:57.688 00:58:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:57.688 00:58:41 -- common/autotest_common.sh@10 -- # set +x 00:18:59.593 00:58:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:59.593 00:58:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:59.593 00:58:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:59.593 00:58:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:59.593 00:58:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:59.593 00:58:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:59.593 00:58:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:59.593 00:58:43 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:59.593 00:58:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:59.593 00:58:43 -- nvmf/common.sh@295 -- # e810=() 00:18:59.593 00:58:43 -- nvmf/common.sh@295 -- # local -ga e810 00:18:59.593 00:58:43 -- nvmf/common.sh@296 -- # x722=() 00:18:59.593 00:58:43 -- nvmf/common.sh@296 -- # local -ga x722 00:18:59.593 00:58:43 -- nvmf/common.sh@297 -- # mlx=() 00:18:59.593 00:58:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:59.593 00:58:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.593 00:58:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:59.593 00:58:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:59.593 00:58:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:59.593 00:58:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.593 00:58:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.593 00:58:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.593 00:58:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.593 00:58:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:59.593 00:58:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.593 00:58:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.593 00:58:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.593 00:58:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
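The device-discovery pass traced around this point (gather_supported_nvmf_pci_devs) matches the cached PCI IDs against the supported NIC list (0x8086:0x159b is the E810 device present on this node) and then resolves each PCI function to its kernel net device through sysfs. Roughly, and leaving out the RDMA and virtual-interface branches:

pci_devs=(0000:0a:00.0 0000:0a:00.1)                        # E810 functions found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)        # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")                 # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done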
00:18:59.593 00:58:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.593 00:58:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.593 00:58:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.593 00:58:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.593 00:58:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.593 00:58:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.593 00:58:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.593 00:58:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.593 00:58:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:59.593 00:58:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:59.593 00:58:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:59.593 00:58:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.593 00:58:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.593 00:58:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.593 00:58:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:59.593 00:58:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.593 00:58:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.593 00:58:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:59.593 00:58:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.593 00:58:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.593 00:58:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:59.593 00:58:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:59.593 00:58:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.593 00:58:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.593 00:58:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.593 00:58:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.593 00:58:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:59.593 00:58:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.593 00:58:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.593 00:58:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.593 00:58:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:59.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:18:59.593 00:18:59.593 --- 10.0.0.2 ping statistics --- 00:18:59.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.593 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:59.593 00:58:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:59.593 00:18:59.593 --- 10.0.0.1 ping statistics --- 00:18:59.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.593 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:59.593 00:58:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.593 00:58:43 -- nvmf/common.sh@410 -- # return 0 00:18:59.593 00:58:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:59.593 00:58:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.593 00:58:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:59.593 00:58:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.593 00:58:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:59.593 00:58:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:59.852 00:58:43 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:59.852 00:58:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:59.852 00:58:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:59.852 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:59.852 00:58:43 -- nvmf/common.sh@469 -- # nvmfpid=3414038 00:18:59.852 00:58:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:59.852 00:58:43 -- nvmf/common.sh@470 -- # waitforlisten 3414038 00:18:59.852 00:58:43 -- common/autotest_common.sh@819 -- # '[' -z 3414038 ']' 00:18:59.852 00:58:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.852 00:58:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.852 00:58:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.852 00:58:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.852 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:59.852 [2024-07-23 00:58:43.842190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:59.852 [2024-07-23 00:58:43.842256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.852 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.852 [2024-07-23 00:58:43.909572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.852 [2024-07-23 00:58:44.000610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:59.853 [2024-07-23 00:58:44.000803] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.853 [2024-07-23 00:58:44.000823] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.853 [2024-07-23 00:58:44.000837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
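nvmfappstart above launches the target inside the namespace created earlier and only proceeds once the RPC socket answers, after which the TCP transport is created. A compressed sketch of that sequence (core mask 0xF and the -u 8192 in-capsule data size are the values used in this job; the polling loop is a simplified stand-in for the waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait for the app to come up and serve the default RPC socket
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192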
00:18:59.853 [2024-07-23 00:58:44.000931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.853 [2024-07-23 00:58:44.000988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.853 [2024-07-23 00:58:44.001043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.853 [2024-07-23 00:58:44.001046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.818 00:58:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.818 00:58:44 -- common/autotest_common.sh@852 -- # return 0 00:19:00.818 00:58:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.818 00:58:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:00.818 00:58:44 -- common/autotest_common.sh@10 -- # set +x 00:19:00.818 00:58:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.818 00:58:44 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:01.077 [2024-07-23 00:58:45.113186] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.077 00:58:45 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.335 00:58:45 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:01.335 00:58:45 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.593 00:58:45 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:01.593 00:58:45 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.851 00:58:45 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:01.851 00:58:45 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.109 00:58:46 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:02.109 00:58:46 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:02.366 00:58:46 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.624 00:58:46 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:02.624 00:58:46 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.882 00:58:46 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:02.882 00:58:46 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:03.140 00:58:47 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:03.140 00:58:47 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:03.397 00:58:47 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:03.655 00:58:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:03.655 00:58:47 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:03.912 00:58:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:03.912 00:58:47 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.169 00:58:48 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.169 [2024-07-23 00:58:48.343275] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.169 00:58:48 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:04.426 00:58:48 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:04.683 00:58:48 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.615 00:58:49 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:05.615 00:58:49 -- common/autotest_common.sh@1177 -- # local i=0 00:19:05.615 00:58:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.615 00:58:49 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:05.615 00:58:49 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:05.615 00:58:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:07.514 00:58:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:07.514 00:58:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:07.514 00:58:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.514 00:58:51 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:07.514 00:58:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.514 00:58:51 -- common/autotest_common.sh@1187 -- # return 0 00:19:07.514 00:58:51 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:07.514 [global] 00:19:07.514 thread=1 00:19:07.514 invalidate=1 00:19:07.514 rw=write 00:19:07.514 time_based=1 00:19:07.514 runtime=1 00:19:07.514 ioengine=libaio 00:19:07.514 direct=1 00:19:07.514 bs=4096 00:19:07.514 iodepth=1 00:19:07.514 norandommap=0 00:19:07.514 numjobs=1 00:19:07.514 00:19:07.514 verify_dump=1 00:19:07.514 verify_backlog=512 00:19:07.514 verify_state_save=0 00:19:07.514 do_verify=1 00:19:07.514 verify=crc32c-intel 00:19:07.514 [job0] 00:19:07.514 filename=/dev/nvme0n1 00:19:07.514 [job1] 00:19:07.514 filename=/dev/nvme0n2 00:19:07.514 [job2] 00:19:07.514 filename=/dev/nvme0n3 00:19:07.514 [job3] 00:19:07.514 filename=/dev/nvme0n4 00:19:07.514 Could not set queue depth (nvme0n1) 00:19:07.514 Could not set queue depth (nvme0n2) 00:19:07.514 Could not set queue depth (nvme0n3) 00:19:07.514 Could not set queue depth (nvme0n4) 00:19:07.772 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.772 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.772 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.772 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.772 fio-3.35 
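Unlike the nmic run, the fio target above is backed by composed bdevs as well as plain Malloc ones: Malloc2 and Malloc3 are striped into raid0, Malloc4 through Malloc6 are chained into concat0, and both are attached to cnode1 as extra namespaces before the single host connect that exposes four namespaces (nvme0n1..nvme0n4). A hedged reconstruction of that composition with rpc.py (64 MiB bdevs with 512-byte blocks, as in the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done         # yields Malloc0..Malloc6
$RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0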
00:19:07.772 Starting 4 threads 00:19:09.144 00:19:09.144 job0: (groupid=0, jobs=1): err= 0: pid=3415148: Tue Jul 23 00:58:52 2024 00:19:09.144 read: IOPS=1109, BW=4440KiB/s (4546kB/s)(4444KiB/1001msec) 00:19:09.144 slat (nsec): min=4814, max=76002, avg=21225.46, stdev=10929.90 00:19:09.144 clat (usec): min=283, max=2750, avg=441.61, stdev=109.55 00:19:09.144 lat (usec): min=289, max=2783, avg=462.83, stdev=111.12 00:19:09.144 clat percentiles (usec): 00:19:09.144 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 363], 00:19:09.144 | 30.00th=[ 388], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 465], 00:19:09.144 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 570], 00:19:09.144 | 99.00th=[ 619], 99.50th=[ 898], 99.90th=[ 1074], 99.95th=[ 2737], 00:19:09.144 | 99.99th=[ 2737] 00:19:09.144 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:09.144 slat (nsec): min=6249, max=83646, avg=15448.14, stdev=9393.69 00:19:09.144 clat (usec): min=190, max=3369, avg=291.90, stdev=108.85 00:19:09.144 lat (usec): min=199, max=3377, avg=307.35, stdev=110.22 00:19:09.144 clat percentiles (usec): 00:19:09.144 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:19:09.144 | 30.00th=[ 227], 40.00th=[ 245], 50.00th=[ 265], 60.00th=[ 302], 00:19:09.144 | 70.00th=[ 330], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 424], 00:19:09.144 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 676], 99.95th=[ 3359], 00:19:09.144 | 99.99th=[ 3359] 00:19:09.144 bw ( KiB/s): min= 6736, max= 6736, per=39.74%, avg=6736.00, stdev= 0.00, samples=1 00:19:09.144 iops : min= 1684, max= 1684, avg=1684.00, stdev= 0.00, samples=1 00:19:09.144 lat (usec) : 250=24.78%, 500=66.11%, 750=8.80%, 1000=0.11% 00:19:09.144 lat (msec) : 2=0.11%, 4=0.08% 00:19:09.144 cpu : usr=2.70%, sys=4.80%, ctx=2648, majf=0, minf=2 00:19:09.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.144 issued rwts: total=1111,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.144 job1: (groupid=0, jobs=1): err= 0: pid=3415149: Tue Jul 23 00:58:52 2024 00:19:09.144 read: IOPS=621, BW=2486KiB/s (2546kB/s)(2496KiB/1004msec) 00:19:09.144 slat (nsec): min=6211, max=61406, avg=20386.87, stdev=10234.88 00:19:09.144 clat (usec): min=323, max=42054, avg=958.89, stdev=4297.27 00:19:09.144 lat (usec): min=334, max=42069, avg=979.28, stdev=4296.73 00:19:09.144 clat percentiles (usec): 00:19:09.144 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 416], 00:19:09.144 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 519], 00:19:09.144 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 652], 00:19:09.144 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:09.144 | 99.99th=[42206] 00:19:09.144 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:19:09.144 slat (nsec): min=8252, max=87888, avg=25494.05, stdev=13772.79 00:19:09.144 clat (usec): min=208, max=1123, avg=347.35, stdev=91.92 00:19:09.144 lat (usec): min=221, max=1136, avg=372.85, stdev=100.11 00:19:09.144 clat percentiles (usec): 00:19:09.144 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 255], 00:19:09.144 | 30.00th=[ 269], 40.00th=[ 297], 50.00th=[ 338], 60.00th=[ 375], 00:19:09.144 | 70.00th=[ 408], 80.00th=[ 
437], 90.00th=[ 469], 95.00th=[ 494], 00:19:09.144 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 1123], 00:19:09.144 | 99.99th=[ 1123] 00:19:09.144 bw ( KiB/s): min= 2840, max= 5352, per=24.16%, avg=4096.00, stdev=1776.25, samples=2 00:19:09.144 iops : min= 710, max= 1338, avg=1024.00, stdev=444.06, samples=2 00:19:09.144 lat (usec) : 250=10.19%, 500=69.05%, 750=19.96% 00:19:09.144 lat (msec) : 2=0.36%, 50=0.42% 00:19:09.144 cpu : usr=2.59%, sys=5.18%, ctx=1651, majf=0, minf=1 00:19:09.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.144 issued rwts: total=624,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.144 job2: (groupid=0, jobs=1): err= 0: pid=3415152: Tue Jul 23 00:58:52 2024 00:19:09.144 read: IOPS=49, BW=197KiB/s (201kB/s)(204KiB/1038msec) 00:19:09.144 slat (nsec): min=7403, max=33960, avg=18471.45, stdev=9768.24 00:19:09.144 clat (usec): min=399, max=42070, avg=17420.67, stdev=20446.51 00:19:09.144 lat (usec): min=408, max=42083, avg=17439.14, stdev=20449.85 00:19:09.144 clat percentiles (usec): 00:19:09.144 | 1.00th=[ 400], 5.00th=[ 449], 10.00th=[ 453], 20.00th=[ 474], 00:19:09.144 | 30.00th=[ 482], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[40633], 00:19:09.144 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:09.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:09.144 | 99.99th=[42206] 00:19:09.144 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:19:09.144 slat (nsec): min=6765, max=44431, avg=12013.67, stdev=6286.00 00:19:09.144 clat (usec): min=196, max=436, avg=273.94, stdev=45.62 00:19:09.144 lat (usec): min=207, max=444, avg=285.96, stdev=45.05 00:19:09.144 clat percentiles (usec): 00:19:09.144 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 233], 00:19:09.145 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 277], 00:19:09.145 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 355], 00:19:09.145 | 99.00th=[ 408], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 437], 00:19:09.145 | 99.99th=[ 437] 00:19:09.145 bw ( KiB/s): min= 4096, max= 4096, per=24.16%, avg=4096.00, stdev= 0.00, samples=1 00:19:09.145 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:09.145 lat (usec) : 250=33.21%, 500=61.28%, 750=1.78% 00:19:09.145 lat (msec) : 50=3.73% 00:19:09.145 cpu : usr=0.19%, sys=0.77%, ctx=564, majf=0, minf=1 00:19:09.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.145 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.145 job3: (groupid=0, jobs=1): err= 0: pid=3415153: Tue Jul 23 00:58:52 2024 00:19:09.145 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:09.145 slat (nsec): min=5867, max=66781, avg=19740.35, stdev=10030.06 00:19:09.145 clat (usec): min=348, max=40666, avg=536.77, stdev=1256.49 00:19:09.145 lat (usec): min=373, max=40699, avg=556.51, stdev=1256.90 00:19:09.145 clat percentiles (usec): 00:19:09.145 | 1.00th=[ 388], 5.00th=[ 424], 
10.00th=[ 437], 20.00th=[ 449], 00:19:09.145 | 30.00th=[ 461], 40.00th=[ 478], 50.00th=[ 494], 60.00th=[ 506], 00:19:09.145 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 603], 00:19:09.145 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 734], 99.95th=[40633], 00:19:09.145 | 99.99th=[40633] 00:19:09.145 write: IOPS=1325, BW=5303KiB/s (5430kB/s)(5308KiB/1001msec); 0 zone resets 00:19:09.145 slat (nsec): min=6922, max=63274, avg=20087.56, stdev=11727.25 00:19:09.145 clat (usec): min=203, max=490, avg=294.55, stdev=57.65 00:19:09.145 lat (usec): min=213, max=508, avg=314.64, stdev=62.86 00:19:09.145 clat percentiles (usec): 00:19:09.145 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:19:09.145 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297], 00:19:09.145 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 400], 00:19:09.145 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 486], 99.95th=[ 490], 00:19:09.145 | 99.99th=[ 490] 00:19:09.145 bw ( KiB/s): min= 4824, max= 4824, per=28.46%, avg=4824.00, stdev= 0.00, samples=1 00:19:09.145 iops : min= 1206, max= 1206, avg=1206.00, stdev= 0.00, samples=1 00:19:09.145 lat (usec) : 250=15.53%, 500=64.31%, 750=20.12% 00:19:09.145 lat (msec) : 50=0.04% 00:19:09.145 cpu : usr=3.20%, sys=4.20%, ctx=2352, majf=0, minf=1 00:19:09.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.145 issued rwts: total=1024,1327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.145 00:19:09.145 Run status group 0 (all jobs): 00:19:09.145 READ: bw=10.6MiB/s (11.1MB/s), 197KiB/s-4440KiB/s (201kB/s-4546kB/s), io=11.0MiB (11.5MB), run=1001-1038msec 00:19:09.145 WRITE: bw=16.6MiB/s (17.4MB/s), 1973KiB/s-6138KiB/s (2020kB/s-6285kB/s), io=17.2MiB (18.0MB), run=1001-1038msec 00:19:09.145 00:19:09.145 Disk stats (read/write): 00:19:09.145 nvme0n1: ios=1074/1150, merge=0/0, ticks=449/328, in_queue=777, util=86.57% 00:19:09.145 nvme0n2: ios=643/1024, merge=0/0, ticks=1371/324, in_queue=1695, util=97.86% 00:19:09.145 nvme0n3: ios=104/512, merge=0/0, ticks=968/133, in_queue=1101, util=97.80% 00:19:09.145 nvme0n4: ios=941/1024, merge=0/0, ticks=854/305, in_queue=1159, util=97.89% 00:19:09.145 00:58:52 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:09.145 [global] 00:19:09.145 thread=1 00:19:09.145 invalidate=1 00:19:09.145 rw=randwrite 00:19:09.145 time_based=1 00:19:09.145 runtime=1 00:19:09.145 ioengine=libaio 00:19:09.145 direct=1 00:19:09.145 bs=4096 00:19:09.145 iodepth=1 00:19:09.145 norandommap=0 00:19:09.145 numjobs=1 00:19:09.145 00:19:09.145 verify_dump=1 00:19:09.145 verify_backlog=512 00:19:09.145 verify_state_save=0 00:19:09.145 do_verify=1 00:19:09.145 verify=crc32c-intel 00:19:09.145 [job0] 00:19:09.145 filename=/dev/nvme0n1 00:19:09.145 [job1] 00:19:09.145 filename=/dev/nvme0n2 00:19:09.145 [job2] 00:19:09.145 filename=/dev/nvme0n3 00:19:09.145 [job3] 00:19:09.145 filename=/dev/nvme0n4 00:19:09.145 Could not set queue depth (nvme0n1) 00:19:09.145 Could not set queue depth (nvme0n2) 00:19:09.145 Could not set queue depth (nvme0n3) 00:19:09.145 Could not set queue depth (nvme0n4) 00:19:09.145 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.145 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.145 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.145 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.145 fio-3.35 00:19:09.145 Starting 4 threads 00:19:10.516 00:19:10.516 job0: (groupid=0, jobs=1): err= 0: pid=3415503: Tue Jul 23 00:58:54 2024 00:19:10.516 read: IOPS=1017, BW=4071KiB/s (4169kB/s)(4120KiB/1012msec) 00:19:10.516 slat (nsec): min=6549, max=61530, avg=19723.15, stdev=8473.43 00:19:10.516 clat (usec): min=269, max=41977, avg=574.94, stdev=3107.22 00:19:10.516 lat (usec): min=276, max=41993, avg=594.67, stdev=3106.91 00:19:10.516 clat percentiles (usec): 00:19:10.516 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:19:10.516 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:19:10.516 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 420], 00:19:10.516 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:19:10.516 | 99.99th=[42206] 00:19:10.516 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:19:10.516 slat (nsec): min=6865, max=58019, avg=17891.58, stdev=7156.87 00:19:10.516 clat (usec): min=173, max=1411, avg=232.66, stdev=63.36 00:19:10.516 lat (usec): min=182, max=1428, avg=250.55, stdev=65.94 00:19:10.516 clat percentiles (usec): 00:19:10.516 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 206], 00:19:10.516 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:19:10.516 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 285], 00:19:10.516 | 99.00th=[ 465], 99.50th=[ 627], 99.90th=[ 1029], 99.95th=[ 1418], 00:19:10.516 | 99.99th=[ 1418] 00:19:10.516 bw ( KiB/s): min= 4096, max= 8192, per=51.70%, avg=6144.00, stdev=2896.31, samples=2 00:19:10.516 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:10.516 lat (usec) : 250=50.70%, 500=47.82%, 750=1.05%, 1000=0.12% 00:19:10.516 lat (msec) : 2=0.08%, 50=0.23% 00:19:10.516 cpu : usr=3.07%, sys=5.74%, ctx=2568, majf=0, minf=2 00:19:10.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.516 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.516 job1: (groupid=0, jobs=1): err= 0: pid=3415504: Tue Jul 23 00:58:54 2024 00:19:10.516 read: IOPS=18, BW=75.7KiB/s (77.5kB/s)(76.0KiB/1004msec) 00:19:10.516 slat (nsec): min=16026, max=37672, avg=27140.58, stdev=9339.46 00:19:10.516 clat (usec): min=40902, max=42095, avg=41368.00, stdev=491.46 00:19:10.516 lat (usec): min=40940, max=42115, avg=41395.14, stdev=494.81 00:19:10.516 clat percentiles (usec): 00:19:10.516 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:10.516 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:19:10.516 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:10.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:10.516 | 99.99th=[42206] 00:19:10.516 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 
00:19:10.517 slat (nsec): min=10137, max=63706, avg=29107.59, stdev=10095.21 00:19:10.517 clat (usec): min=218, max=2513, avg=387.19, stdev=208.33 00:19:10.517 lat (usec): min=231, max=2553, avg=416.29, stdev=212.14 00:19:10.517 clat percentiles (usec): 00:19:10.517 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 262], 00:19:10.517 | 30.00th=[ 281], 40.00th=[ 306], 50.00th=[ 343], 60.00th=[ 388], 00:19:10.517 | 70.00th=[ 420], 80.00th=[ 469], 90.00th=[ 537], 95.00th=[ 627], 00:19:10.517 | 99.00th=[ 1020], 99.50th=[ 2278], 99.90th=[ 2507], 99.95th=[ 2507], 00:19:10.517 | 99.99th=[ 2507] 00:19:10.517 bw ( KiB/s): min= 4096, max= 4096, per=34.47%, avg=4096.00, stdev= 0.00, samples=1 00:19:10.517 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:10.517 lat (usec) : 250=11.49%, 500=71.00%, 750=11.86%, 1000=0.94% 00:19:10.517 lat (msec) : 2=0.56%, 4=0.56%, 50=3.58% 00:19:10.517 cpu : usr=1.30%, sys=1.60%, ctx=532, majf=0, minf=1 00:19:10.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.517 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.517 job2: (groupid=0, jobs=1): err= 0: pid=3415506: Tue Jul 23 00:58:54 2024 00:19:10.517 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:19:10.517 slat (nsec): min=14266, max=20900, avg=15015.24, stdev=1392.29 00:19:10.517 clat (usec): min=40875, max=41396, avg=40998.63, stdev=97.16 00:19:10.517 lat (usec): min=40896, max=41410, avg=41013.65, stdev=96.71 00:19:10.517 clat percentiles (usec): 00:19:10.517 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:10.517 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:10.517 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:10.517 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:10.517 | 99.99th=[41157] 00:19:10.517 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:19:10.517 slat (usec): min=8, max=162, avg=13.88, stdev=15.81 00:19:10.517 clat (usec): min=143, max=1273, avg=310.85, stdev=119.39 00:19:10.517 lat (usec): min=211, max=1288, avg=324.74, stdev=121.79 00:19:10.517 clat percentiles (usec): 00:19:10.517 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 239], 00:19:10.517 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 277], 00:19:10.517 | 70.00th=[ 306], 80.00th=[ 379], 90.00th=[ 474], 95.00th=[ 519], 00:19:10.517 | 99.00th=[ 807], 99.50th=[ 906], 99.90th=[ 1270], 99.95th=[ 1270], 00:19:10.517 | 99.99th=[ 1270] 00:19:10.517 bw ( KiB/s): min= 4096, max= 4096, per=34.47%, avg=4096.00, stdev= 0.00, samples=1 00:19:10.517 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:10.517 lat (usec) : 250=34.52%, 500=54.41%, 750=5.82%, 1000=1.13% 00:19:10.517 lat (msec) : 2=0.19%, 50=3.94% 00:19:10.517 cpu : usr=0.10%, sys=0.97%, ctx=538, majf=0, minf=1 00:19:10.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.517 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.517 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:19:10.517 job3: (groupid=0, jobs=1): err= 0: pid=3415507: Tue Jul 23 00:58:54 2024 00:19:10.517 read: IOPS=20, BW=81.2KiB/s (83.2kB/s)(84.0KiB/1034msec) 00:19:10.517 slat (nsec): min=15350, max=33548, avg=26810.67, stdev=8436.26 00:19:10.517 clat (usec): min=40916, max=42166, avg=41544.17, stdev=523.09 00:19:10.517 lat (usec): min=40949, max=42184, avg=41570.99, stdev=525.21 00:19:10.517 clat percentiles (usec): 00:19:10.517 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:10.517 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:19:10.517 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:10.517 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:10.517 | 99.99th=[42206] 00:19:10.517 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:19:10.517 slat (nsec): min=8931, max=63710, avg=19924.60, stdev=8153.25 00:19:10.517 clat (usec): min=208, max=2020, avg=288.59, stdev=142.35 00:19:10.517 lat (usec): min=219, max=2035, avg=308.51, stdev=144.58 00:19:10.517 clat percentiles (usec): 00:19:10.517 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:19:10.517 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:19:10.517 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 412], 95.00th=[ 510], 00:19:10.517 | 99.00th=[ 873], 99.50th=[ 930], 99.90th=[ 2024], 99.95th=[ 2024], 00:19:10.517 | 99.99th=[ 2024] 00:19:10.517 bw ( KiB/s): min= 4096, max= 4096, per=34.47%, avg=4096.00, stdev= 0.00, samples=1 00:19:10.517 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:10.517 lat (usec) : 250=54.41%, 500=36.59%, 750=3.00%, 1000=1.69% 00:19:10.517 lat (msec) : 2=0.19%, 4=0.19%, 50=3.94% 00:19:10.517 cpu : usr=0.58%, sys=0.97%, ctx=534, majf=0, minf=1 00:19:10.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.517 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.517 00:19:10.517 Run status group 0 (all jobs): 00:19:10.517 READ: bw=4221KiB/s (4322kB/s), 75.7KiB/s-4071KiB/s (77.5kB/s-4169kB/s), io=4364KiB (4469kB), run=1004-1034msec 00:19:10.517 WRITE: bw=11.6MiB/s (12.2MB/s), 1981KiB/s-6071KiB/s (2028kB/s-6217kB/s), io=12.0MiB (12.6MB), run=1004-1034msec 00:19:10.517 00:19:10.517 Disk stats (read/write): 00:19:10.517 nvme0n1: ios=1051/1536, merge=0/0, ticks=1407/350, in_queue=1757, util=97.60% 00:19:10.517 nvme0n2: ios=65/512, merge=0/0, ticks=815/195, in_queue=1010, util=97.86% 00:19:10.517 nvme0n3: ios=65/512, merge=0/0, ticks=830/155, in_queue=985, util=96.44% 00:19:10.517 nvme0n4: ios=73/512, merge=0/0, ticks=975/146, in_queue=1121, util=97.57% 00:19:10.517 00:58:54 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:10.517 [global] 00:19:10.517 thread=1 00:19:10.517 invalidate=1 00:19:10.517 rw=write 00:19:10.517 time_based=1 00:19:10.517 runtime=1 00:19:10.517 ioengine=libaio 00:19:10.517 direct=1 00:19:10.517 bs=4096 00:19:10.517 iodepth=128 00:19:10.517 norandommap=0 00:19:10.517 numjobs=1 00:19:10.517 00:19:10.517 verify_dump=1 00:19:10.517 verify_backlog=512 00:19:10.517 verify_state_save=0 00:19:10.517 do_verify=1 
00:19:10.517 verify=crc32c-intel 00:19:10.517 [job0] 00:19:10.517 filename=/dev/nvme0n1 00:19:10.517 [job1] 00:19:10.517 filename=/dev/nvme0n2 00:19:10.517 [job2] 00:19:10.517 filename=/dev/nvme0n3 00:19:10.517 [job3] 00:19:10.517 filename=/dev/nvme0n4 00:19:10.517 Could not set queue depth (nvme0n1) 00:19:10.517 Could not set queue depth (nvme0n2) 00:19:10.517 Could not set queue depth (nvme0n3) 00:19:10.517 Could not set queue depth (nvme0n4) 00:19:10.517 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.517 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.517 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.517 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.517 fio-3.35 00:19:10.517 Starting 4 threads 00:19:11.890 00:19:11.890 job0: (groupid=0, jobs=1): err= 0: pid=3415736: Tue Jul 23 00:58:55 2024 00:19:11.890 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:11.890 slat (usec): min=2, max=5022, avg=84.86, stdev=456.20 00:19:11.890 clat (usec): min=6242, max=18029, avg=11591.07, stdev=1594.13 00:19:11.890 lat (usec): min=6530, max=19504, avg=11675.94, stdev=1611.72 00:19:11.890 clat percentiles (usec): 00:19:11.890 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10552], 00:19:11.890 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11994], 00:19:11.890 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13435], 95.00th=[14615], 00:19:11.890 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:19:11.890 | 99.99th=[17957] 00:19:11.890 write: IOPS=5624, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:19:11.890 slat (usec): min=4, max=5657, avg=90.25, stdev=509.09 00:19:11.890 clat (usec): min=239, max=18454, avg=11952.43, stdev=1553.00 00:19:11.890 lat (usec): min=3585, max=18514, avg=12042.68, stdev=1538.89 00:19:11.890 clat percentiles (usec): 00:19:11.890 | 1.00th=[ 6587], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11076], 00:19:11.890 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:19:11.890 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:19:11.890 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16712], 99.95th=[17171], 00:19:11.890 | 99.99th=[18482] 00:19:11.890 bw ( KiB/s): min=20192, max=23832, per=29.76%, avg=22012.00, stdev=2573.87, samples=2 00:19:11.890 iops : min= 5048, max= 5958, avg=5503.00, stdev=643.47, samples=2 00:19:11.890 lat (usec) : 250=0.01% 00:19:11.890 lat (msec) : 4=0.15%, 10=9.77%, 20=90.07% 00:19:11.890 cpu : usr=8.10%, sys=10.30%, ctx=415, majf=0, minf=11 00:19:11.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:11.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.890 issued rwts: total=5120,5630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.890 job1: (groupid=0, jobs=1): err= 0: pid=3415743: Tue Jul 23 00:58:55 2024 00:19:11.890 read: IOPS=2763, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1003msec) 00:19:11.890 slat (usec): min=3, max=21834, avg=160.93, stdev=1098.38 00:19:11.890 clat (usec): min=2168, max=98060, avg=18780.67, stdev=13062.71 00:19:11.890 lat (usec): min=3118, 
max=98084, avg=18941.61, stdev=13179.20 00:19:11.890 clat percentiles (usec): 00:19:11.890 | 1.00th=[ 3654], 5.00th=[10290], 10.00th=[11076], 20.00th=[12256], 00:19:11.890 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13829], 60.00th=[14746], 00:19:11.890 | 70.00th=[21627], 80.00th=[22414], 90.00th=[25822], 95.00th=[42730], 00:19:11.890 | 99.00th=[93848], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:19:11.890 | 99.99th=[98042] 00:19:11.890 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:19:11.890 slat (usec): min=5, max=21073, avg=169.66, stdev=1167.40 00:19:11.890 clat (usec): min=8687, max=94463, avg=23269.30, stdev=19529.78 00:19:11.890 lat (usec): min=8710, max=94479, avg=23438.96, stdev=19622.54 00:19:11.891 clat percentiles (usec): 00:19:11.891 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 00:19:11.891 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13566], 60.00th=[14615], 00:19:11.891 | 70.00th=[16581], 80.00th=[36963], 90.00th=[58459], 95.00th=[64226], 00:19:11.891 | 99.00th=[87557], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:19:11.891 | 99.99th=[94897] 00:19:11.891 bw ( KiB/s): min= 7032, max=17544, per=16.61%, avg=12288.00, stdev=7433.11, samples=2 00:19:11.891 iops : min= 1758, max= 4386, avg=3072.00, stdev=1858.28, samples=2 00:19:11.891 lat (msec) : 4=0.56%, 10=3.03%, 20=66.84%, 50=20.28%, 100=9.29% 00:19:11.891 cpu : usr=5.19%, sys=5.79%, ctx=337, majf=0, minf=17 00:19:11.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.891 issued rwts: total=2772,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.891 job2: (groupid=0, jobs=1): err= 0: pid=3415744: Tue Jul 23 00:58:55 2024 00:19:11.891 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:19:11.891 slat (usec): min=2, max=13134, avg=116.83, stdev=749.29 00:19:11.891 clat (usec): min=4890, max=42229, avg=15127.76, stdev=4837.96 00:19:11.891 lat (usec): min=4905, max=42235, avg=15244.59, stdev=4874.46 00:19:11.891 clat percentiles (usec): 00:19:11.891 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10945], 20.00th=[12256], 00:19:11.891 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13698], 60.00th=[13960], 00:19:11.891 | 70.00th=[15533], 80.00th=[17695], 90.00th=[20841], 95.00th=[24249], 00:19:11.891 | 99.00th=[33817], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:19:11.891 | 99.99th=[42206] 00:19:11.891 write: IOPS=4404, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1003msec); 0 zone resets 00:19:11.891 slat (usec): min=4, max=10821, avg=109.32, stdev=642.38 00:19:11.891 clat (usec): min=348, max=42225, avg=14759.91, stdev=4857.35 00:19:11.891 lat (usec): min=3325, max=42238, avg=14869.23, stdev=4882.61 00:19:11.891 clat percentiles (usec): 00:19:11.891 | 1.00th=[ 5800], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[11600], 00:19:11.891 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13566], 60.00th=[14091], 00:19:11.891 | 70.00th=[14877], 80.00th=[19268], 90.00th=[22414], 95.00th=[22938], 00:19:11.891 | 99.00th=[28181], 99.50th=[30802], 99.90th=[31065], 99.95th=[33817], 00:19:11.891 | 99.99th=[42206] 00:19:11.891 bw ( KiB/s): min=16785, max=17568, per=23.22%, avg=17176.50, stdev=553.66, samples=2 00:19:11.891 iops : min= 4196, max= 4392, avg=4294.00, stdev=138.59, samples=2 00:19:11.891 lat 
(usec) : 500=0.01% 00:19:11.891 lat (msec) : 4=0.08%, 10=8.42%, 20=75.55%, 50=15.94% 00:19:11.891 cpu : usr=5.29%, sys=7.98%, ctx=372, majf=0, minf=5 00:19:11.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.891 issued rwts: total=4096,4418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.891 job3: (groupid=0, jobs=1): err= 0: pid=3415745: Tue Jul 23 00:58:55 2024 00:19:11.891 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:19:11.891 slat (usec): min=3, max=11607, avg=97.12, stdev=674.91 00:19:11.891 clat (usec): min=1275, max=24457, avg=13028.24, stdev=3578.09 00:19:11.891 lat (usec): min=1942, max=27395, avg=13125.36, stdev=3598.72 00:19:11.891 clat percentiles (usec): 00:19:11.891 | 1.00th=[ 2868], 5.00th=[ 8094], 10.00th=[ 9896], 20.00th=[10552], 00:19:11.891 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12387], 60.00th=[13435], 00:19:11.891 | 70.00th=[14222], 80.00th=[15401], 90.00th=[17957], 95.00th=[20055], 00:19:11.891 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24249], 99.95th=[24511], 00:19:11.891 | 99.99th=[24511] 00:19:11.891 write: IOPS=5452, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1006msec); 0 zone resets 00:19:11.891 slat (usec): min=4, max=10495, avg=78.85, stdev=441.89 00:19:11.891 clat (usec): min=2817, max=24422, avg=11094.78, stdev=3064.33 00:19:11.891 lat (usec): min=2838, max=24431, avg=11173.63, stdev=3074.38 00:19:11.891 clat percentiles (usec): 00:19:11.891 | 1.00th=[ 3949], 5.00th=[ 5669], 10.00th=[ 6980], 20.00th=[ 8225], 00:19:11.891 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11994], 60.00th=[12518], 00:19:11.891 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13829], 95.00th=[15795], 00:19:11.891 | 99.00th=[17957], 99.50th=[18220], 99.90th=[23987], 99.95th=[24249], 00:19:11.891 | 99.99th=[24511] 00:19:11.891 bw ( KiB/s): min=20456, max=22400, per=28.97%, avg=21428.00, stdev=1374.62, samples=2 00:19:11.891 iops : min= 5114, max= 5600, avg=5357.00, stdev=343.65, samples=2 00:19:11.891 lat (msec) : 2=0.01%, 4=1.33%, 10=22.16%, 20=73.77%, 50=2.73% 00:19:11.891 cpu : usr=8.46%, sys=10.45%, ctx=542, majf=0, minf=17 00:19:11.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.891 issued rwts: total=5120,5485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.891 00:19:11.891 Run status group 0 (all jobs): 00:19:11.891 READ: bw=66.4MiB/s (69.7MB/s), 10.8MiB/s-20.0MiB/s (11.3MB/s-20.9MB/s), io=66.8MiB (70.1MB), run=1001-1006msec 00:19:11.891 WRITE: bw=72.2MiB/s (75.8MB/s), 12.0MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=72.7MiB (76.2MB), run=1001-1006msec 00:19:11.891 00:19:11.891 Disk stats (read/write): 00:19:11.891 nvme0n1: ios=4425/4608, merge=0/0, ticks=21173/22758, in_queue=43931, util=84.87% 00:19:11.891 nvme0n2: ios=2093/2048, merge=0/0, ticks=13803/14531, in_queue=28334, util=100.00% 00:19:11.891 nvme0n3: ios=3216/3584, merge=0/0, ticks=41318/46029, in_queue=87347, util=99.79% 00:19:11.891 nvme0n4: ios=4184/4608, merge=0/0, ticks=51631/48474, in_queue=100105, util=97.73% 00:19:11.891 00:58:55 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:11.891 [global] 00:19:11.891 thread=1 00:19:11.891 invalidate=1 00:19:11.891 rw=randwrite 00:19:11.891 time_based=1 00:19:11.891 runtime=1 00:19:11.891 ioengine=libaio 00:19:11.891 direct=1 00:19:11.891 bs=4096 00:19:11.891 iodepth=128 00:19:11.891 norandommap=0 00:19:11.891 numjobs=1 00:19:11.891 00:19:11.891 verify_dump=1 00:19:11.891 verify_backlog=512 00:19:11.891 verify_state_save=0 00:19:11.891 do_verify=1 00:19:11.891 verify=crc32c-intel 00:19:11.891 [job0] 00:19:11.891 filename=/dev/nvme0n1 00:19:11.891 [job1] 00:19:11.891 filename=/dev/nvme0n2 00:19:11.891 [job2] 00:19:11.891 filename=/dev/nvme0n3 00:19:11.891 [job3] 00:19:11.891 filename=/dev/nvme0n4 00:19:11.891 Could not set queue depth (nvme0n1) 00:19:11.891 Could not set queue depth (nvme0n2) 00:19:11.891 Could not set queue depth (nvme0n3) 00:19:11.891 Could not set queue depth (nvme0n4) 00:19:12.147 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.147 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.147 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.147 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:12.147 fio-3.35 00:19:12.147 Starting 4 threads 00:19:13.521 00:19:13.521 job0: (groupid=0, jobs=1): err= 0: pid=3415973: Tue Jul 23 00:58:57 2024 00:19:13.521 read: IOPS=3817, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec) 00:19:13.521 slat (usec): min=3, max=12045, avg=103.22, stdev=668.74 00:19:13.521 clat (usec): min=2763, max=41482, avg=13681.14, stdev=5563.51 00:19:13.521 lat (usec): min=2959, max=41515, avg=13784.36, stdev=5608.06 00:19:13.521 clat percentiles (usec): 00:19:13.521 | 1.00th=[ 6325], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9896], 00:19:13.521 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12125], 60.00th=[13173], 00:19:13.521 | 70.00th=[14353], 80.00th=[16909], 90.00th=[20317], 95.00th=[25297], 00:19:13.521 | 99.00th=[32637], 99.50th=[35914], 99.90th=[36963], 99.95th=[40109], 00:19:13.521 | 99.99th=[41681] 00:19:13.521 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:13.521 slat (usec): min=4, max=22782, avg=134.47, stdev=1075.14 00:19:13.521 clat (usec): min=3228, max=66903, avg=18252.16, stdev=10374.58 00:19:13.521 lat (usec): min=3244, max=66919, avg=18386.63, stdev=10466.31 00:19:13.521 clat percentiles (usec): 00:19:13.521 | 1.00th=[ 4015], 5.00th=[ 6456], 10.00th=[ 8455], 20.00th=[ 9372], 00:19:13.521 | 30.00th=[10159], 40.00th=[12256], 50.00th=[15139], 60.00th=[17695], 00:19:13.521 | 70.00th=[22938], 80.00th=[28443], 90.00th=[32900], 95.00th=[38011], 00:19:13.521 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[51643], 00:19:13.521 | 99.99th=[66847] 00:19:13.521 bw ( KiB/s): min=14176, max=18592, per=23.91%, avg=16384.00, stdev=3122.58, samples=2 00:19:13.521 iops : min= 3544, max= 4648, avg=4096.00, stdev=780.65, samples=2 00:19:13.521 lat (msec) : 4=0.74%, 10=25.63%, 20=49.62%, 50=23.96%, 100=0.05% 00:19:13.521 cpu : usr=5.88%, sys=9.46%, ctx=370, majf=0, minf=17 00:19:13.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:13.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.521 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.521 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.521 job1: (groupid=0, jobs=1): err= 0: pid=3415974: Tue Jul 23 00:58:57 2024 00:19:13.521 read: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1005msec) 00:19:13.521 slat (usec): min=2, max=25148, avg=102.58, stdev=949.74 00:19:13.521 clat (usec): min=848, max=46167, avg=14153.11, stdev=7102.03 00:19:13.521 lat (usec): min=864, max=46177, avg=14255.69, stdev=7140.30 00:19:13.521 clat percentiles (usec): 00:19:13.521 | 1.00th=[ 2704], 5.00th=[ 7177], 10.00th=[ 8356], 20.00th=[ 9634], 00:19:13.521 | 30.00th=[10290], 40.00th=[10945], 50.00th=[12256], 60.00th=[12911], 00:19:13.521 | 70.00th=[14484], 80.00th=[17433], 90.00th=[22676], 95.00th=[30540], 00:19:13.521 | 99.00th=[45876], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:19:13.521 | 99.99th=[46400] 00:19:13.521 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:19:13.521 slat (usec): min=3, max=12488, avg=77.65, stdev=577.64 00:19:13.521 clat (usec): min=2012, max=42758, avg=12125.64, stdev=4838.56 00:19:13.521 lat (usec): min=2044, max=42770, avg=12203.29, stdev=4862.22 00:19:13.521 clat percentiles (usec): 00:19:13.521 | 1.00th=[ 3687], 5.00th=[ 5669], 10.00th=[ 7046], 20.00th=[ 7898], 00:19:13.521 | 30.00th=[ 9503], 40.00th=[10814], 50.00th=[11338], 60.00th=[12387], 00:19:13.521 | 70.00th=[13566], 80.00th=[15664], 90.00th=[17957], 95.00th=[20841], 00:19:13.521 | 99.00th=[29230], 99.50th=[34866], 99.90th=[34866], 99.95th=[35390], 00:19:13.521 | 99.99th=[42730] 00:19:13.521 bw ( KiB/s): min=18912, max=21216, per=29.28%, avg=20064.00, stdev=1629.17, samples=2 00:19:13.521 iops : min= 4728, max= 5304, avg=5016.00, stdev=407.29, samples=2 00:19:13.521 lat (usec) : 1000=0.01% 00:19:13.521 lat (msec) : 2=0.01%, 4=1.10%, 10=28.83%, 20=59.83%, 50=10.22% 00:19:13.521 cpu : usr=4.68%, sys=7.97%, ctx=392, majf=0, minf=7 00:19:13.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:13.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.521 issued rwts: total=4631,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.521 job2: (groupid=0, jobs=1): err= 0: pid=3415975: Tue Jul 23 00:58:57 2024 00:19:13.521 read: IOPS=4088, BW=16.0MiB/s (16.7MB/s)(16.1MiB/1009msec) 00:19:13.521 slat (usec): min=2, max=10334, avg=107.37, stdev=758.49 00:19:13.521 clat (usec): min=4602, max=30406, avg=14705.07, stdev=3849.25 00:19:13.521 lat (usec): min=5186, max=33526, avg=14812.44, stdev=3893.18 00:19:13.521 clat percentiles (usec): 00:19:13.521 | 1.00th=[ 6456], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[11731], 00:19:13.521 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13829], 60.00th=[14746], 00:19:13.521 | 70.00th=[15533], 80.00th=[17695], 90.00th=[21103], 95.00th=[22676], 00:19:13.521 | 99.00th=[25560], 99.50th=[26084], 99.90th=[26084], 99.95th=[27657], 00:19:13.521 | 99.99th=[30278] 00:19:13.521 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:19:13.521 slat (usec): min=3, max=20064, avg=102.58, stdev=764.27 00:19:13.521 clat (usec): min=627, max=53919, avg=14598.13, stdev=6977.16 00:19:13.521 lat (usec): min=636, max=53923, avg=14700.71, stdev=7013.10 00:19:13.521 clat percentiles 
(usec): 00:19:13.521 | 1.00th=[ 5145], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9634], 00:19:13.521 | 30.00th=[10159], 40.00th=[11338], 50.00th=[13435], 60.00th=[14091], 00:19:13.521 | 70.00th=[15401], 80.00th=[18220], 90.00th=[22938], 95.00th=[31065], 00:19:13.521 | 99.00th=[38011], 99.50th=[41157], 99.90th=[51643], 99.95th=[51643], 00:19:13.521 | 99.99th=[53740] 00:19:13.521 bw ( KiB/s): min=16384, max=19688, per=26.32%, avg=18036.00, stdev=2336.28, samples=2 00:19:13.521 iops : min= 4096, max= 4922, avg=4509.00, stdev=584.07, samples=2 00:19:13.521 lat (usec) : 750=0.05%, 1000=0.05% 00:19:13.521 lat (msec) : 2=0.02%, 4=0.15%, 10=16.03%, 20=71.08%, 50=12.55% 00:19:13.521 lat (msec) : 100=0.08% 00:19:13.521 cpu : usr=3.27%, sys=5.26%, ctx=302, majf=0, minf=13 00:19:13.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:13.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.521 issued rwts: total=4125,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.521 job3: (groupid=0, jobs=1): err= 0: pid=3415976: Tue Jul 23 00:58:57 2024 00:19:13.521 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:19:13.521 slat (usec): min=2, max=14091, avg=102.44, stdev=681.98 00:19:13.521 clat (usec): min=4580, max=31495, avg=14791.97, stdev=3815.29 00:19:13.521 lat (usec): min=4591, max=31499, avg=14894.41, stdev=3852.21 00:19:13.521 clat percentiles (usec): 00:19:13.521 | 1.00th=[ 5800], 5.00th=[ 9372], 10.00th=[11076], 20.00th=[12911], 00:19:13.521 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13698], 60.00th=[15008], 00:19:13.521 | 70.00th=[15533], 80.00th=[16712], 90.00th=[19268], 95.00th=[23725], 00:19:13.521 | 99.00th=[26084], 99.50th=[27919], 99.90th=[31589], 99.95th=[31589], 00:19:13.521 | 99.99th=[31589] 00:19:13.521 write: IOPS=3486, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1013msec); 0 zone resets 00:19:13.521 slat (usec): min=3, max=14396, avg=171.44, stdev=1009.89 00:19:13.521 clat (msec): min=4, max=131, avg=23.47, stdev=25.52 00:19:13.521 lat (msec): min=4, max=131, avg=23.64, stdev=25.69 00:19:13.522 clat percentiles (msec): 00:19:13.522 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:19:13.522 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 16], 00:19:13.522 | 70.00th=[ 21], 80.00th=[ 23], 90.00th=[ 35], 95.00th=[ 97], 00:19:13.522 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 132], 00:19:13.522 | 99.99th=[ 132] 00:19:13.522 bw ( KiB/s): min=10848, max=16384, per=19.87%, avg=13616.00, stdev=3914.54, samples=2 00:19:13.522 iops : min= 2712, max= 4096, avg=3404.00, stdev=978.64, samples=2 00:19:13.522 lat (msec) : 10=5.59%, 20=74.26%, 50=15.69%, 100=1.94%, 250=2.53% 00:19:13.522 cpu : usr=3.95%, sys=7.51%, ctx=324, majf=0, minf=13 00:19:13.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:13.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.522 issued rwts: total=3072,3532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.522 00:19:13.522 Run status group 0 (all jobs): 00:19:13.522 READ: bw=60.4MiB/s (63.3MB/s), 11.8MiB/s-18.0MiB/s (12.4MB/s-18.9MB/s), io=61.2MiB (64.2MB), run=1005-1013msec 00:19:13.522 WRITE: bw=66.9MiB/s 
(70.2MB/s), 13.6MiB/s-19.9MiB/s (14.3MB/s-20.9MB/s), io=67.8MiB (71.1MB), run=1005-1013msec 00:19:13.522 00:19:13.522 Disk stats (read/write): 00:19:13.522 nvme0n1: ios=2866/3072, merge=0/0, ticks=19996/30967, in_queue=50963, util=87.58% 00:19:13.522 nvme0n2: ios=4146/4310, merge=0/0, ticks=44508/39783, in_queue=84291, util=91.27% 00:19:13.522 nvme0n3: ios=3612/3661, merge=0/0, ticks=36048/31832, in_queue=67880, util=95.31% 00:19:13.522 nvme0n4: ios=3093/3324, merge=0/0, ticks=30035/41245, in_queue=71280, util=99.37% 00:19:13.522 00:58:57 -- target/fio.sh@55 -- # sync 00:19:13.522 00:58:57 -- target/fio.sh@59 -- # fio_pid=3416114 00:19:13.522 00:58:57 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:13.522 00:58:57 -- target/fio.sh@61 -- # sleep 3 00:19:13.522 [global] 00:19:13.522 thread=1 00:19:13.522 invalidate=1 00:19:13.522 rw=read 00:19:13.522 time_based=1 00:19:13.522 runtime=10 00:19:13.522 ioengine=libaio 00:19:13.522 direct=1 00:19:13.522 bs=4096 00:19:13.522 iodepth=1 00:19:13.522 norandommap=1 00:19:13.522 numjobs=1 00:19:13.522 00:19:13.522 [job0] 00:19:13.522 filename=/dev/nvme0n1 00:19:13.522 [job1] 00:19:13.522 filename=/dev/nvme0n2 00:19:13.522 [job2] 00:19:13.522 filename=/dev/nvme0n3 00:19:13.522 [job3] 00:19:13.522 filename=/dev/nvme0n4 00:19:13.522 Could not set queue depth (nvme0n1) 00:19:13.522 Could not set queue depth (nvme0n2) 00:19:13.522 Could not set queue depth (nvme0n3) 00:19:13.522 Could not set queue depth (nvme0n4) 00:19:13.522 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.522 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.522 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.522 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.522 fio-3.35 00:19:13.522 Starting 4 threads 00:19:16.801 00:59:00 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:16.801 00:59:00 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:16.801 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18169856, buflen=4096 00:19:16.801 fio: pid=3416214, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:16.801 00:59:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.802 00:59:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:16.802 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27987968, buflen=4096 00:19:16.802 fio: pid=3416212, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:17.060 00:59:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.060 00:59:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:17.060 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14155776, buflen=4096 00:19:17.060 fio: pid=3416210, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:17.318 00:59:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.318 00:59:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:17.318 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2998272, buflen=4096 00:19:17.318 fio: pid=3416211, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:17.318 00:19:17.318 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3416210: Tue Jul 23 00:59:01 2024 00:19:17.318 read: IOPS=1006, BW=4026KiB/s (4122kB/s)(13.5MiB/3434msec) 00:19:17.318 slat (nsec): min=5602, max=73274, avg=14523.23, stdev=8365.98 00:19:17.318 clat (usec): min=279, max=42183, avg=965.14, stdev=4987.73 00:19:17.318 lat (usec): min=285, max=42197, avg=979.66, stdev=4988.43 00:19:17.318 clat percentiles (usec): 00:19:17.318 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:19:17.318 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 363], 00:19:17.318 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 437], 00:19:17.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:17.318 | 99.99th=[42206] 00:19:17.318 bw ( KiB/s): min= 96, max=10880, per=27.43%, avg=4594.67, stdev=4972.41, samples=6 00:19:17.318 iops : min= 24, max= 2720, avg=1148.67, stdev=1243.10, samples=6 00:19:17.318 lat (usec) : 500=97.54%, 750=0.93%, 1000=0.03% 00:19:17.318 lat (msec) : 50=1.48% 00:19:17.318 cpu : usr=0.58%, sys=1.72%, ctx=3460, majf=0, minf=1 00:19:17.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.318 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.318 issued rwts: total=3457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.318 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3416211: Tue Jul 23 00:59:01 2024 00:19:17.318 read: IOPS=198, BW=793KiB/s (812kB/s)(2928KiB/3692msec) 00:19:17.318 slat (usec): min=5, max=10902, avg=27.66, stdev=402.28 00:19:17.318 clat (usec): min=263, max=44962, avg=4981.14, stdev=12878.48 00:19:17.318 lat (usec): min=270, max=52165, avg=5008.79, stdev=12929.78 00:19:17.318 clat percentiles (usec): 00:19:17.318 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 343], 20.00th=[ 379], 00:19:17.318 | 30.00th=[ 396], 40.00th=[ 424], 50.00th=[ 457], 60.00th=[ 490], 00:19:17.318 | 70.00th=[ 506], 80.00th=[ 545], 90.00th=[41157], 95.00th=[41157], 00:19:17.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:19:17.318 | 99.99th=[44827] 00:19:17.318 bw ( KiB/s): min= 96, max= 2254, per=4.60%, avg=771.14, stdev=905.39, samples=7 00:19:17.318 iops : min= 24, max= 563, avg=192.71, stdev=226.21, samples=7 00:19:17.318 lat (usec) : 500=66.85%, 750=21.96% 00:19:17.318 lat (msec) : 50=11.05% 00:19:17.318 cpu : usr=0.19%, sys=0.22%, ctx=738, majf=0, minf=1 00:19:17.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.318 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.318 issued rwts: total=733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.319 job2: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3416212: Tue Jul 23 00:59:01 2024 00:19:17.319 read: IOPS=2159, BW=8638KiB/s (8846kB/s)(26.7MiB/3164msec) 00:19:17.319 slat (nsec): min=6032, max=62089, avg=13737.03, stdev=6857.68 00:19:17.319 clat (usec): min=288, max=42167, avg=442.54, stdev=1563.94 00:19:17.319 lat (usec): min=295, max=42187, avg=456.27, stdev=1564.07 00:19:17.319 clat percentiles (usec): 00:19:17.319 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:19:17.319 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:19:17.319 | 70.00th=[ 392], 80.00th=[ 416], 90.00th=[ 457], 95.00th=[ 486], 00:19:17.319 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:19:17.319 | 99.99th=[42206] 00:19:17.319 bw ( KiB/s): min= 3616, max=10880, per=51.90%, avg=8692.00, stdev=2653.14, samples=6 00:19:17.319 iops : min= 904, max= 2720, avg=2173.00, stdev=663.28, samples=6 00:19:17.319 lat (usec) : 500=96.03%, 750=3.69%, 1000=0.07% 00:19:17.319 lat (msec) : 2=0.03%, 10=0.01%, 50=0.15% 00:19:17.319 cpu : usr=1.77%, sys=4.43%, ctx=6835, majf=0, minf=1 00:19:17.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.319 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.319 issued rwts: total=6834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.319 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3416214: Tue Jul 23 00:59:01 2024 00:19:17.319 read: IOPS=1528, BW=6114KiB/s (6261kB/s)(17.3MiB/2902msec) 00:19:17.319 slat (nsec): min=4334, max=64400, avg=15385.31, stdev=9011.71 00:19:17.319 clat (usec): min=274, max=42014, avg=629.87, stdev=3343.14 00:19:17.319 lat (usec): min=278, max=42031, avg=645.26, stdev=3343.56 00:19:17.319 clat percentiles (usec): 00:19:17.319 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 314], 00:19:17.319 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:19:17.319 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 498], 00:19:17.319 | 99.00th=[ 742], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:17.319 | 99.99th=[42206] 00:19:17.319 bw ( KiB/s): min= 96, max=10144, per=31.89%, avg=5340.80, stdev=5032.97, samples=5 00:19:17.319 iops : min= 24, max= 2536, avg=1335.20, stdev=1258.24, samples=5 00:19:17.319 lat (usec) : 500=95.02%, 750=3.97%, 1000=0.27% 00:19:17.319 lat (msec) : 2=0.02%, 4=0.02%, 50=0.68% 00:19:17.319 cpu : usr=1.28%, sys=2.96%, ctx=4437, majf=0, minf=1 00:19:17.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.319 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.319 issued rwts: total=4437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:17.319 00:19:17.319 Run status group 0 (all jobs): 00:19:17.319 READ: bw=16.4MiB/s (17.1MB/s), 793KiB/s-8638KiB/s (812kB/s-8846kB/s), io=60.4MiB (63.3MB), run=2902-3692msec 00:19:17.319 00:19:17.319 Disk stats (read/write): 00:19:17.319 nvme0n1: ios=3454/0, merge=0/0, ticks=3226/0, in_queue=3226, util=95.91% 00:19:17.319 nvme0n2: ios=773/0, merge=0/0, ticks=4843/0, in_queue=4843, util=99.79% 00:19:17.319 nvme0n3: ios=6792/0, 
merge=0/0, ticks=4068/0, in_queue=4068, util=100.00% 00:19:17.319 nvme0n4: ios=4350/0, merge=0/0, ticks=2715/0, in_queue=2715, util=96.74% 00:19:17.605 00:59:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.605 00:59:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:17.862 00:59:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.862 00:59:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:18.120 00:59:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.120 00:59:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:18.378 00:59:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.378 00:59:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:18.635 00:59:02 -- target/fio.sh@69 -- # fio_status=0 00:19:18.635 00:59:02 -- target/fio.sh@70 -- # wait 3416114 00:19:18.635 00:59:02 -- target/fio.sh@70 -- # fio_status=4 00:19:18.635 00:59:02 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:18.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:18.635 00:59:02 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:18.635 00:59:02 -- common/autotest_common.sh@1198 -- # local i=0 00:19:18.635 00:59:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:18.635 00:59:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:18.635 00:59:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:18.635 00:59:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:18.635 00:59:02 -- common/autotest_common.sh@1210 -- # return 0 00:19:18.635 00:59:02 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:18.635 00:59:02 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:18.635 nvmf hotplug test: fio failed as expected 00:19:18.635 00:59:02 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.893 00:59:03 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:18.893 00:59:03 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:18.893 00:59:03 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:18.893 00:59:03 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:18.893 00:59:03 -- target/fio.sh@91 -- # nvmftestfini 00:19:18.893 00:59:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:18.893 00:59:03 -- nvmf/common.sh@116 -- # sync 00:19:18.893 00:59:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:18.893 00:59:03 -- nvmf/common.sh@119 -- # set +e 00:19:18.893 00:59:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:18.893 00:59:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:18.893 rmmod nvme_tcp 00:19:18.893 rmmod nvme_fabrics 00:19:19.151 rmmod nvme_keyring 00:19:19.151 00:59:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:19.151 00:59:03 -- nvmf/common.sh@123 -- # set -e 00:19:19.151 00:59:03 -- nvmf/common.sh@124 -- # return 0 00:19:19.151 00:59:03 -- 
nvmf/common.sh@477 -- # '[' -n 3414038 ']' 00:19:19.151 00:59:03 -- nvmf/common.sh@478 -- # killprocess 3414038 00:19:19.151 00:59:03 -- common/autotest_common.sh@926 -- # '[' -z 3414038 ']' 00:19:19.151 00:59:03 -- common/autotest_common.sh@930 -- # kill -0 3414038 00:19:19.151 00:59:03 -- common/autotest_common.sh@931 -- # uname 00:19:19.151 00:59:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:19.151 00:59:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3414038 00:19:19.151 00:59:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:19.151 00:59:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:19.151 00:59:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3414038' 00:19:19.151 killing process with pid 3414038 00:19:19.151 00:59:03 -- common/autotest_common.sh@945 -- # kill 3414038 00:19:19.151 00:59:03 -- common/autotest_common.sh@950 -- # wait 3414038 00:19:19.409 00:59:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:19.409 00:59:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:19.409 00:59:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:19.409 00:59:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.409 00:59:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:19.409 00:59:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.409 00:59:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.409 00:59:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.315 00:59:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:21.315 00:19:21.315 real 0m23.859s 00:19:21.315 user 1m22.194s 00:19:21.315 sys 0m7.292s 00:19:21.315 00:59:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.315 00:59:05 -- common/autotest_common.sh@10 -- # set +x 00:19:21.315 ************************************ 00:19:21.315 END TEST nvmf_fio_target 00:19:21.315 ************************************ 00:19:21.315 00:59:05 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:21.315 00:59:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:21.315 00:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:21.315 00:59:05 -- common/autotest_common.sh@10 -- # set +x 00:19:21.315 ************************************ 00:19:21.315 START TEST nvmf_bdevio 00:19:21.315 ************************************ 00:19:21.315 00:59:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:21.315 * Looking for test storage... 
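For orientation, the hotplug exercise that target/fio.sh ran above reduces to: start a 10-second read job against the exported namespaces, hot-remove the backing RAID and malloc bdevs through rpc.py while the job is still running, and then treat a non-zero fio exit as the expected result (the reads above fail with err=121, Remote I/O error). A condensed sketch of that flow, using only commands that appear in this log, follows; rpc.py and fio-wrapper stand for spdk/scripts/rpc.py and spdk/scripts/fio-wrapper.

    fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s of reads in the background
    fio_pid=$!
    sleep 3
    rpc.py bdev_raid_delete concat0                    # pull bdevs out from under active I/O
    rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    if ! wait "$fio_pid"; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi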
00:19:21.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.315 00:59:05 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.315 00:59:05 -- nvmf/common.sh@7 -- # uname -s 00:19:21.315 00:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.315 00:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.315 00:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.315 00:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.315 00:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.315 00:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.315 00:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.315 00:59:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.315 00:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.315 00:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.315 00:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.315 00:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.315 00:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.315 00:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.315 00:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.315 00:59:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.315 00:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.315 00:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.315 00:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.315 00:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.315 00:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.316 00:59:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.316 00:59:05 -- paths/export.sh@5 -- # export PATH 00:19:21.316 00:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.316 00:59:05 -- nvmf/common.sh@46 -- # : 0 00:19:21.316 00:59:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:21.316 00:59:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:21.316 00:59:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:21.316 00:59:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.316 00:59:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.316 00:59:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:21.316 00:59:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:21.316 00:59:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:21.316 00:59:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.316 00:59:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.316 00:59:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:21.316 00:59:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:21.316 00:59:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.316 00:59:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:21.316 00:59:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:21.316 00:59:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:21.316 00:59:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.316 00:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.316 00:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.316 00:59:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:21.316 00:59:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:21.316 00:59:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:21.316 00:59:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.845 00:59:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.845 00:59:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:23.845 00:59:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:23.845 00:59:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:23.845 00:59:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:23.845 00:59:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:23.845 00:59:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:23.845 00:59:07 -- nvmf/common.sh@294 -- # net_devs=() 00:19:23.845 00:59:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:23.845 00:59:07 -- nvmf/common.sh@295 
-- # e810=() 00:19:23.845 00:59:07 -- nvmf/common.sh@295 -- # local -ga e810 00:19:23.845 00:59:07 -- nvmf/common.sh@296 -- # x722=() 00:19:23.845 00:59:07 -- nvmf/common.sh@296 -- # local -ga x722 00:19:23.845 00:59:07 -- nvmf/common.sh@297 -- # mlx=() 00:19:23.845 00:59:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:23.845 00:59:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.845 00:59:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:23.845 00:59:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:23.845 00:59:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:23.845 00:59:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.845 00:59:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.845 00:59:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.845 00:59:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.845 00:59:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:23.845 00:59:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.845 00:59:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.845 00:59:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.845 00:59:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.845 00:59:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.845 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:23.845 00:59:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.845 00:59:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.845 00:59:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.845 00:59:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.845 00:59:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.845 00:59:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.845 00:59:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.845 00:59:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:23.845 00:59:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:23.845 00:59:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:23.845 00:59:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:23.846 00:59:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:23.846 00:59:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.846 00:59:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.846 00:59:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.846 00:59:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:23.846 00:59:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.846 00:59:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.846 00:59:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:23.846 00:59:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.846 00:59:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.846 00:59:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:23.846 00:59:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:23.846 00:59:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.846 00:59:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.846 00:59:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.846 00:59:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.846 00:59:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:23.846 00:59:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.846 00:59:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.846 00:59:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.846 00:59:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:23.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:19:23.846 00:19:23.846 --- 10.0.0.2 ping statistics --- 00:19:23.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.846 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:23.846 00:59:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
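The nvmf_tcp_init sequence traced here moves one port of the two-port E810 NIC (cvl_0_0) into a private network namespace: the target address 10.0.0.2 lives on cvl_0_0 inside cvl_0_0_ns_spdk, the initiator keeps 10.0.0.1 on cvl_0_1 in the root namespace, and the two pings confirm connectivity in both directions before any NVMe/TCP traffic is attempted. Collected in one place, the commands executed above are:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP port 4420 through
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1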
00:19:23.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:23.846 00:19:23.846 --- 10.0.0.1 ping statistics --- 00:19:23.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.846 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:23.846 00:59:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.846 00:59:07 -- nvmf/common.sh@410 -- # return 0 00:19:23.846 00:59:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:23.846 00:59:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.846 00:59:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:23.846 00:59:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:23.846 00:59:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.846 00:59:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:23.846 00:59:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:23.846 00:59:07 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:23.846 00:59:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.846 00:59:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:23.846 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:19:23.846 00:59:07 -- nvmf/common.sh@469 -- # nvmfpid=3418860 00:19:23.846 00:59:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:23.846 00:59:07 -- nvmf/common.sh@470 -- # waitforlisten 3418860 00:19:23.846 00:59:07 -- common/autotest_common.sh@819 -- # '[' -z 3418860 ']' 00:19:23.846 00:59:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.846 00:59:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:23.846 00:59:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.846 00:59:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:23.846 00:59:07 -- common/autotest_common.sh@10 -- # set +x 00:19:23.846 [2024-07-23 00:59:07.760623] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:23.846 [2024-07-23 00:59:07.760728] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.846 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.846 [2024-07-23 00:59:07.831542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.846 [2024-07-23 00:59:07.928179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.846 [2024-07-23 00:59:07.928343] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.846 [2024-07-23 00:59:07.928363] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.846 [2024-07-23 00:59:07.928379] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
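At this point the harness has started the NVMe-oF target inside the cvl_0_0_ns_spdk namespace on cores 3-6 (mask 0x78) and waits for its RPC socket before issuing any rpc_cmd. A minimal standalone sketch of that step, assuming the workspace path and namespace name shown in this log, with the waitforlisten helper approximated by polling the socket via rpc_get_methods:

#!/usr/bin/env bash
# Sketch only: reproduce the nvmfappstart step outside the test harness.
# SPDK_DIR, NETNS and RPC_SOCK are taken from this log; adjust for other setups.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NETNS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

# Start nvmf_tgt in the target namespace with all tracepoint groups enabled,
# as the harness does via "nvmfappstart -m 0x78".
ip netns exec "$NETNS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
tgt_pid=$!

# Stand-in for waitforlisten: poll until the RPC server answers on the socket.
for _ in $(seq 1 100); do
  [ -S "$RPC_SOCK" ] &&
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done
echo "nvmf_tgt ready (pid $tgt_pid)"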
00:19:23.846 [2024-07-23 00:59:07.928478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.846 [2024-07-23 00:59:07.928503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:23.846 [2024-07-23 00:59:07.928567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:23.846 [2024-07-23 00:59:07.928570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.778 00:59:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:24.778 00:59:08 -- common/autotest_common.sh@852 -- # return 0 00:19:24.778 00:59:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.778 00:59:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:24.778 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:19:24.778 00:59:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.778 00:59:08 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.778 00:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.778 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:19:24.778 [2024-07-23 00:59:08.710124] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.778 00:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.778 00:59:08 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.778 00:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.778 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:19:24.778 Malloc0 00:19:24.778 00:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.778 00:59:08 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.778 00:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.778 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:19:24.778 00:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.778 00:59:08 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.778 00:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.778 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:19:24.778 00:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.778 00:59:08 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.778 00:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.778 00:59:08 -- common/autotest_common.sh@10 -- # set +x 00:19:24.778 [2024-07-23 00:59:08.761237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.778 00:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.778 00:59:08 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:24.778 00:59:08 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:24.778 00:59:08 -- nvmf/common.sh@520 -- # config=() 00:19:24.778 00:59:08 -- nvmf/common.sh@520 -- # local subsystem config 00:19:24.778 00:59:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:24.778 00:59:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:24.778 { 00:19:24.778 "params": { 00:19:24.778 "name": "Nvme$subsystem", 00:19:24.778 "trtype": "$TEST_TRANSPORT", 00:19:24.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.778 "adrfam": "ipv4", 00:19:24.778 "trsvcid": 
"$NVMF_PORT", 00:19:24.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.778 "hdgst": ${hdgst:-false}, 00:19:24.778 "ddgst": ${ddgst:-false} 00:19:24.778 }, 00:19:24.778 "method": "bdev_nvme_attach_controller" 00:19:24.778 } 00:19:24.778 EOF 00:19:24.778 )") 00:19:24.778 00:59:08 -- nvmf/common.sh@542 -- # cat 00:19:24.778 00:59:08 -- nvmf/common.sh@544 -- # jq . 00:19:24.778 00:59:08 -- nvmf/common.sh@545 -- # IFS=, 00:19:24.778 00:59:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:24.778 "params": { 00:19:24.778 "name": "Nvme1", 00:19:24.778 "trtype": "tcp", 00:19:24.778 "traddr": "10.0.0.2", 00:19:24.778 "adrfam": "ipv4", 00:19:24.778 "trsvcid": "4420", 00:19:24.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.778 "hdgst": false, 00:19:24.778 "ddgst": false 00:19:24.778 }, 00:19:24.778 "method": "bdev_nvme_attach_controller" 00:19:24.778 }' 00:19:24.778 [2024-07-23 00:59:08.801031] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:24.778 [2024-07-23 00:59:08.801119] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419023 ] 00:19:24.778 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.778 [2024-07-23 00:59:08.862024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.778 [2024-07-23 00:59:08.951150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.778 [2024-07-23 00:59:08.951178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.778 [2024-07-23 00:59:08.951181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.344 [2024-07-23 00:59:09.245448] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:25.344 [2024-07-23 00:59:09.245505] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:25.344 I/O targets: 00:19:25.344 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:25.344 00:19:25.344 00:19:25.344 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.344 http://cunit.sourceforge.net/ 00:19:25.344 00:19:25.344 00:19:25.344 Suite: bdevio tests on: Nvme1n1 00:19:25.344 Test: blockdev write read block ...passed 00:19:25.344 Test: blockdev write zeroes read block ...passed 00:19:25.344 Test: blockdev write zeroes read no split ...passed 00:19:25.344 Test: blockdev write zeroes read split ...passed 00:19:25.344 Test: blockdev write zeroes read split partial ...passed 00:19:25.344 Test: blockdev reset ...[2024-07-23 00:59:09.450446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.344 [2024-07-23 00:59:09.450561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dd860 (9): Bad file descriptor 00:19:25.344 [2024-07-23 00:59:09.519290] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:25.344 passed 00:19:25.344 Test: blockdev write read 8 blocks ...passed 00:19:25.344 Test: blockdev write read size > 128k ...passed 00:19:25.344 Test: blockdev write read invalid size ...passed 00:19:25.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.602 Test: blockdev write read max offset ...passed 00:19:25.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.602 Test: blockdev writev readv 8 blocks ...passed 00:19:25.602 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.602 Test: blockdev writev readv block ...passed 00:19:25.602 Test: blockdev writev readv size > 128k ...passed 00:19:25.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.602 Test: blockdev comparev and writev ...[2024-07-23 00:59:09.777207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.777243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.777267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.777284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.777633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.777658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.777681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.777698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.778050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.778074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.778095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.778112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.778443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.778468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.602 [2024-07-23 00:59:09.778489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.602 [2024-07-23 00:59:09.778506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.860 passed 00:19:25.860 Test: blockdev nvme passthru rw ...passed 00:19:25.860 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:59:09.861955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.860 [2024-07-23 00:59:09.861983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.860 [2024-07-23 00:59:09.862176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.860 [2024-07-23 00:59:09.862199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.860 [2024-07-23 00:59:09.862389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.860 [2024-07-23 00:59:09.862412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.860 [2024-07-23 00:59:09.862604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.860 [2024-07-23 00:59:09.862635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.860 passed 00:19:25.860 Test: blockdev nvme admin passthru ...passed 00:19:25.860 Test: blockdev copy ...passed 00:19:25.860 00:19:25.860 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.860 suites 1 1 n/a 0 0 00:19:25.860 tests 23 23 23 0 0 00:19:25.860 asserts 152 152 152 0 n/a 00:19:25.860 00:19:25.860 Elapsed time = 1.354 seconds 00:19:26.118 00:59:10 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:26.118 00:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.118 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:19:26.118 00:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.118 00:59:10 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:26.118 00:59:10 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:26.118 00:59:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.118 00:59:10 -- nvmf/common.sh@116 -- # sync 00:19:26.118 00:59:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:26.118 00:59:10 -- nvmf/common.sh@119 -- # set +e 00:19:26.118 00:59:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.118 00:59:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:26.118 rmmod nvme_tcp 00:19:26.118 rmmod nvme_fabrics 00:19:26.118 rmmod nvme_keyring 00:19:26.118 00:59:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.118 00:59:10 -- nvmf/common.sh@123 -- # set -e 00:19:26.118 00:59:10 -- nvmf/common.sh@124 -- # return 0 00:19:26.118 00:59:10 -- nvmf/common.sh@477 -- # '[' -n 3418860 ']' 00:19:26.118 00:59:10 -- nvmf/common.sh@478 -- # killprocess 3418860 00:19:26.118 00:59:10 -- common/autotest_common.sh@926 -- # '[' -z 3418860 ']' 00:19:26.118 00:59:10 -- common/autotest_common.sh@930 -- # kill -0 3418860 00:19:26.118 00:59:10 -- common/autotest_common.sh@931 -- # uname 00:19:26.118 00:59:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:26.118 00:59:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3418860 00:19:26.118 00:59:10 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:26.118 00:59:10 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:26.118 00:59:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3418860' 00:19:26.118 killing process with pid 3418860 00:19:26.118 00:59:10 -- common/autotest_common.sh@945 -- # kill 3418860 00:19:26.118 00:59:10 -- common/autotest_common.sh@950 -- # wait 3418860 00:19:26.377 00:59:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.377 00:59:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:26.377 00:59:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:26.377 00:59:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.377 00:59:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:26.377 00:59:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.377 00:59:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.377 00:59:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.277 00:59:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:28.277 00:19:28.277 real 0m7.016s 00:19:28.277 user 0m13.307s 00:19:28.277 sys 0m2.154s 00:19:28.277 00:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.277 00:59:12 -- common/autotest_common.sh@10 -- # set +x 00:19:28.277 ************************************ 00:19:28.277 END TEST nvmf_bdevio 00:19:28.277 ************************************ 00:19:28.534 00:59:12 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:28.534 00:59:12 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:28.534 00:59:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:28.534 00:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.534 00:59:12 -- common/autotest_common.sh@10 -- # set +x 00:19:28.534 ************************************ 00:19:28.534 START TEST nvmf_bdevio_no_huge 00:19:28.534 ************************************ 00:19:28.534 00:59:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:28.534 * Looking for test storage... 
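Before the no-huge variant repeats the same flow, note that both bdevio runs drive an identical target-side configuration through rpc_cmd: a TCP transport, a 64 MiB malloc bdev, a subsystem exposing it, and a listener on 10.0.0.2:4420. A sketch of that sequence as direct rpc.py calls, with the socket path and all values taken from the output above:

# Same rpc_cmd sequence as target/bdevio.sh, issued directly against spdk.sock.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the test's options
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ramdisk, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420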
00:19:28.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.534 00:59:12 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.534 00:59:12 -- nvmf/common.sh@7 -- # uname -s 00:19:28.534 00:59:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.534 00:59:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.534 00:59:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.534 00:59:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.534 00:59:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.534 00:59:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.534 00:59:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.534 00:59:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.534 00:59:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.534 00:59:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.534 00:59:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.534 00:59:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.534 00:59:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.534 00:59:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.534 00:59:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.534 00:59:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.534 00:59:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.534 00:59:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.534 00:59:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.534 00:59:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.534 00:59:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.535 00:59:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.535 00:59:12 -- paths/export.sh@5 -- # export PATH 00:19:28.535 00:59:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.535 00:59:12 -- nvmf/common.sh@46 -- # : 0 00:19:28.535 00:59:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:28.535 00:59:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:28.535 00:59:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:28.535 00:59:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.535 00:59:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.535 00:59:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:28.535 00:59:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:28.535 00:59:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:28.535 00:59:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.535 00:59:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.535 00:59:12 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:28.535 00:59:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:28.535 00:59:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.535 00:59:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:28.535 00:59:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:28.535 00:59:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:28.535 00:59:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.535 00:59:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.535 00:59:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.535 00:59:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:28.535 00:59:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:28.535 00:59:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:28.535 00:59:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.434 00:59:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:30.434 00:59:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:30.434 00:59:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:30.434 00:59:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:30.434 00:59:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:30.434 00:59:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:30.434 00:59:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:30.434 00:59:14 -- nvmf/common.sh@294 -- # net_devs=() 00:19:30.434 00:59:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:30.434 00:59:14 -- nvmf/common.sh@295 
-- # e810=() 00:19:30.434 00:59:14 -- nvmf/common.sh@295 -- # local -ga e810 00:19:30.434 00:59:14 -- nvmf/common.sh@296 -- # x722=() 00:19:30.434 00:59:14 -- nvmf/common.sh@296 -- # local -ga x722 00:19:30.434 00:59:14 -- nvmf/common.sh@297 -- # mlx=() 00:19:30.434 00:59:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:30.434 00:59:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.434 00:59:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:30.434 00:59:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:30.434 00:59:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:30.434 00:59:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:30.434 00:59:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.434 00:59:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:30.434 00:59:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.434 00:59:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:30.434 00:59:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:30.434 00:59:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.434 00:59:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:30.434 00:59:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.434 00:59:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.434 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:30.434 00:59:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.434 00:59:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:30.434 00:59:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.434 00:59:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:30.434 00:59:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.434 00:59:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.434 00:59:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.434 00:59:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:30.434 00:59:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:30.434 00:59:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:30.434 00:59:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.434 00:59:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.434 00:59:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.434 00:59:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:30.434 00:59:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.434 00:59:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.434 00:59:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:30.434 00:59:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.434 00:59:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.434 00:59:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:30.434 00:59:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:30.434 00:59:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.434 00:59:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.434 00:59:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.434 00:59:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.434 00:59:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:30.434 00:59:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.434 00:59:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.434 00:59:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.434 00:59:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:30.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:19:30.434 00:19:30.434 --- 10.0.0.2 ping statistics --- 00:19:30.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.434 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:30.434 00:59:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:19:30.434 00:19:30.434 --- 10.0.0.1 ping statistics --- 00:19:30.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.434 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:30.434 00:59:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.434 00:59:14 -- nvmf/common.sh@410 -- # return 0 00:19:30.434 00:59:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:30.434 00:59:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.434 00:59:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:30.434 00:59:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.434 00:59:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:30.434 00:59:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:30.693 00:59:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:30.693 00:59:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:30.693 00:59:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:30.693 00:59:14 -- common/autotest_common.sh@10 -- # set +x 00:19:30.693 00:59:14 -- nvmf/common.sh@469 -- # nvmfpid=3421164 00:19:30.693 00:59:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:30.693 00:59:14 -- nvmf/common.sh@470 -- # waitforlisten 3421164 00:19:30.693 00:59:14 -- common/autotest_common.sh@819 -- # '[' -z 3421164 ']' 00:19:30.693 00:59:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.693 00:59:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:30.693 00:59:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.693 00:59:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:30.693 00:59:14 -- common/autotest_common.sh@10 -- # set +x 00:19:30.693 [2024-07-23 00:59:14.687397] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:30.693 [2024-07-23 00:59:14.687472] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:30.693 [2024-07-23 00:59:14.767699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.693 [2024-07-23 00:59:14.848458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:30.693 [2024-07-23 00:59:14.848620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.693 [2024-07-23 00:59:14.848637] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.693 [2024-07-23 00:59:14.848649] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
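The only functional difference from the first bdevio pass is that both SPDK applications now run without hugepages: nvmf_tgt is started with --no-huge -s 1024 (1024 MB of regular memory instead of hugepage-backed memory), and the bdevio initiator gets the same flags. A sketch of that launch, paths as in this log and the JSON config written to a plain file instead of the harness's /dev/fd/62:

# No-hugepage variant of the same launch (sketch; nvme1.json is a hypothetical
# file holding the bdev_nvme_attach_controller config printed in this log).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

"$SPDK_DIR/test/bdev/bdevio/bdevio" --json ./nvme1.json --no-huge -s 1024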
00:19:30.693 [2024-07-23 00:59:14.848736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:30.693 [2024-07-23 00:59:14.848799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:30.693 [2024-07-23 00:59:14.848865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:30.693 [2024-07-23 00:59:14.848868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.626 00:59:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:31.626 00:59:15 -- common/autotest_common.sh@852 -- # return 0 00:19:31.626 00:59:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:31.626 00:59:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:31.626 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:19:31.626 00:59:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.626 00:59:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.626 00:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.626 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:19:31.626 [2024-07-23 00:59:15.701635] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.626 00:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.626 00:59:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.626 00:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.626 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:19:31.626 Malloc0 00:19:31.626 00:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.626 00:59:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.626 00:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.626 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:19:31.626 00:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.626 00:59:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.626 00:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.626 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:19:31.626 00:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.626 00:59:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.626 00:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.626 00:59:15 -- common/autotest_common.sh@10 -- # set +x 00:19:31.626 [2024-07-23 00:59:15.739549] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.626 00:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.626 00:59:15 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:31.626 00:59:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:31.626 00:59:15 -- nvmf/common.sh@520 -- # config=() 00:19:31.626 00:59:15 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.626 00:59:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.626 00:59:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.626 { 00:19:31.626 "params": { 00:19:31.626 "name": "Nvme$subsystem", 00:19:31.626 "trtype": "$TEST_TRANSPORT", 00:19:31.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.626 "adrfam": "ipv4", 00:19:31.626 
"trsvcid": "$NVMF_PORT", 00:19:31.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.626 "hdgst": ${hdgst:-false}, 00:19:31.626 "ddgst": ${ddgst:-false} 00:19:31.626 }, 00:19:31.626 "method": "bdev_nvme_attach_controller" 00:19:31.626 } 00:19:31.626 EOF 00:19:31.626 )") 00:19:31.626 00:59:15 -- nvmf/common.sh@542 -- # cat 00:19:31.626 00:59:15 -- nvmf/common.sh@544 -- # jq . 00:19:31.626 00:59:15 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.626 00:59:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.626 "params": { 00:19:31.626 "name": "Nvme1", 00:19:31.626 "trtype": "tcp", 00:19:31.626 "traddr": "10.0.0.2", 00:19:31.626 "adrfam": "ipv4", 00:19:31.626 "trsvcid": "4420", 00:19:31.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.626 "hdgst": false, 00:19:31.626 "ddgst": false 00:19:31.626 }, 00:19:31.626 "method": "bdev_nvme_attach_controller" 00:19:31.626 }' 00:19:31.626 [2024-07-23 00:59:15.787138] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:31.626 [2024-07-23 00:59:15.787223] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3421290 ] 00:19:31.885 [2024-07-23 00:59:15.851927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.885 [2024-07-23 00:59:15.935081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.885 [2024-07-23 00:59:15.935133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.885 [2024-07-23 00:59:15.935136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.143 [2024-07-23 00:59:16.245597] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:32.143 [2024-07-23 00:59:16.245658] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:32.143 I/O targets: 00:19:32.143 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:32.143 00:19:32.143 00:19:32.143 CUnit - A unit testing framework for C - Version 2.1-3 00:19:32.143 http://cunit.sourceforge.net/ 00:19:32.143 00:19:32.143 00:19:32.143 Suite: bdevio tests on: Nvme1n1 00:19:32.143 Test: blockdev write read block ...passed 00:19:32.143 Test: blockdev write zeroes read block ...passed 00:19:32.143 Test: blockdev write zeroes read no split ...passed 00:19:32.401 Test: blockdev write zeroes read split ...passed 00:19:32.401 Test: blockdev write zeroes read split partial ...passed 00:19:32.401 Test: blockdev reset ...[2024-07-23 00:59:16.458025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:32.401 [2024-07-23 00:59:16.458135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc07ef0 (9): Bad file descriptor 00:19:32.401 [2024-07-23 00:59:16.474463] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:32.401 passed 00:19:32.401 Test: blockdev write read 8 blocks ...passed 00:19:32.401 Test: blockdev write read size > 128k ...passed 00:19:32.401 Test: blockdev write read invalid size ...passed 00:19:32.401 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.401 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.401 Test: blockdev write read max offset ...passed 00:19:32.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.659 Test: blockdev writev readv 8 blocks ...passed 00:19:32.659 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.659 Test: blockdev writev readv block ...passed 00:19:32.659 Test: blockdev writev readv size > 128k ...passed 00:19:32.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.659 Test: blockdev comparev and writev ...[2024-07-23 00:59:16.770918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.770954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.770978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.770995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.771394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.771419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.771441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.771457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.771839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.771865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.771905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.772291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.772317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.772338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.659 [2024-07-23 00:59:16.772355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:32.659 passed 00:19:32.659 Test: blockdev nvme passthru rw ...passed 00:19:32.659 Test: blockdev nvme passthru vendor specific ...[2024-07-23 00:59:16.855968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.659 [2024-07-23 00:59:16.855996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.856189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.659 [2024-07-23 00:59:16.856212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.856399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.659 [2024-07-23 00:59:16.856421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:32.659 [2024-07-23 00:59:16.856610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.659 [2024-07-23 00:59:16.856650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:32.659 passed 00:19:32.917 Test: blockdev nvme admin passthru ...passed 00:19:32.917 Test: blockdev copy ...passed 00:19:32.917 00:19:32.917 Run Summary: Type Total Ran Passed Failed Inactive 00:19:32.917 suites 1 1 n/a 0 0 00:19:32.917 tests 23 23 23 0 0 00:19:32.917 asserts 152 152 152 0 n/a 00:19:32.917 00:19:32.917 Elapsed time = 1.328 seconds 00:19:33.175 00:59:17 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.175 00:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.175 00:59:17 -- common/autotest_common.sh@10 -- # set +x 00:19:33.175 00:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:33.175 00:59:17 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:33.175 00:59:17 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:33.175 00:59:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:33.175 00:59:17 -- nvmf/common.sh@116 -- # sync 00:19:33.175 00:59:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:33.175 00:59:17 -- nvmf/common.sh@119 -- # set +e 00:19:33.175 00:59:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:33.175 00:59:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:33.175 rmmod nvme_tcp 00:19:33.175 rmmod nvme_fabrics 00:19:33.175 rmmod nvme_keyring 00:19:33.175 00:59:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:33.175 00:59:17 -- nvmf/common.sh@123 -- # set -e 00:19:33.175 00:59:17 -- nvmf/common.sh@124 -- # return 0 00:19:33.175 00:59:17 -- nvmf/common.sh@477 -- # '[' -n 3421164 ']' 00:19:33.175 00:59:17 -- nvmf/common.sh@478 -- # killprocess 3421164 00:19:33.175 00:59:17 -- common/autotest_common.sh@926 -- # '[' -z 3421164 ']' 00:19:33.175 00:59:17 -- common/autotest_common.sh@930 -- # kill -0 3421164 00:19:33.175 00:59:17 -- common/autotest_common.sh@931 -- # uname 00:19:33.175 00:59:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:33.175 00:59:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3421164 00:19:33.175 00:59:17 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:33.175 00:59:17 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:33.175 00:59:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3421164' 00:19:33.175 killing process with pid 3421164 00:19:33.175 00:59:17 -- common/autotest_common.sh@945 -- # kill 3421164 00:19:33.175 00:59:17 -- common/autotest_common.sh@950 -- # wait 3421164 00:19:33.741 00:59:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.741 00:59:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:33.741 00:59:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:33.741 00:59:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.741 00:59:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:33.741 00:59:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.741 00:59:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.741 00:59:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.673 00:59:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:35.673 00:19:35.673 real 0m7.260s 00:19:35.673 user 0m14.608s 00:19:35.673 sys 0m2.492s 00:19:35.673 00:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.673 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:19:35.673 ************************************ 00:19:35.673 END TEST nvmf_bdevio_no_huge 00:19:35.673 ************************************ 00:19:35.673 00:59:19 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:35.673 00:59:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:35.673 00:59:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.673 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:19:35.673 ************************************ 00:19:35.673 START TEST nvmf_tls 00:19:35.673 ************************************ 00:19:35.673 00:59:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:35.673 * Looking for test storage... 
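The wrapper chains these suites with run_test, passing the transport on the command line. To reproduce just the parts covered by this log outside nvmf.sh, the direct invocations would be roughly the following (root and the workspace layout above assumed; the plain bdevio run's flags are inferred from the pattern of the other two):

# Hypothetical direct invocation of the suites seen here, bypassing run_test.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo "$SPDK_DIR/test/nvmf/target/bdevio.sh" --transport=tcp
sudo "$SPDK_DIR/test/nvmf/target/bdevio.sh" --transport=tcp --no-hugepages
sudo "$SPDK_DIR/test/nvmf/target/tls.sh" --transport=tcp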
00:19:35.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.673 00:59:19 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.673 00:59:19 -- nvmf/common.sh@7 -- # uname -s 00:19:35.673 00:59:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.673 00:59:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.673 00:59:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.673 00:59:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.673 00:59:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.673 00:59:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.673 00:59:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.673 00:59:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.673 00:59:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.673 00:59:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.673 00:59:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.673 00:59:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.673 00:59:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.673 00:59:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.673 00:59:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.673 00:59:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.673 00:59:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.673 00:59:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.673 00:59:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.674 00:59:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.674 00:59:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.674 00:59:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.674 00:59:19 -- paths/export.sh@5 -- # export PATH 00:19:35.674 00:59:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.674 00:59:19 -- nvmf/common.sh@46 -- # : 0 00:19:35.674 00:59:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.674 00:59:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.674 00:59:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.674 00:59:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.674 00:59:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.674 00:59:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.674 00:59:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.674 00:59:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.674 00:59:19 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.674 00:59:19 -- target/tls.sh@71 -- # nvmftestinit 00:19:35.674 00:59:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:35.674 00:59:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.674 00:59:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.674 00:59:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.674 00:59:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.674 00:59:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.674 00:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.674 00:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.674 00:59:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.674 00:59:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.674 00:59:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.674 00:59:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.203 00:59:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:38.203 00:59:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:38.203 00:59:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:38.203 00:59:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:38.203 00:59:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:38.203 00:59:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:38.203 00:59:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:38.203 00:59:21 -- nvmf/common.sh@294 -- # net_devs=() 00:19:38.203 00:59:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:38.203 00:59:21 -- nvmf/common.sh@295 -- # e810=() 00:19:38.203 
00:59:21 -- nvmf/common.sh@295 -- # local -ga e810 00:19:38.203 00:59:21 -- nvmf/common.sh@296 -- # x722=() 00:19:38.203 00:59:21 -- nvmf/common.sh@296 -- # local -ga x722 00:19:38.203 00:59:21 -- nvmf/common.sh@297 -- # mlx=() 00:19:38.203 00:59:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:38.203 00:59:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.203 00:59:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:38.203 00:59:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:38.203 00:59:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:38.203 00:59:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:38.203 00:59:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:38.203 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:38.203 00:59:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:38.203 00:59:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:38.203 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:38.203 00:59:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:38.203 00:59:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:38.203 00:59:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.203 00:59:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:38.203 00:59:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.203 00:59:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:38.203 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:38.203 00:59:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.203 00:59:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:38.203 00:59:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.203 00:59:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:38.203 00:59:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.203 00:59:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:38.203 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:38.203 00:59:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.203 00:59:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:38.203 00:59:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:38.203 00:59:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:38.203 00:59:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.203 00:59:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.203 00:59:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.203 00:59:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:38.203 00:59:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.203 00:59:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.203 00:59:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:38.203 00:59:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.203 00:59:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.203 00:59:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:38.203 00:59:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:38.203 00:59:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.203 00:59:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.203 00:59:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.203 00:59:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.203 00:59:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:38.203 00:59:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.203 00:59:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.203 00:59:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.203 00:59:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:38.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:19:38.203 00:19:38.203 --- 10.0.0.2 ping statistics --- 00:19:38.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.203 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:38.203 00:59:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:38.203 00:19:38.203 --- 10.0.0.1 ping statistics --- 00:19:38.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.203 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:38.203 00:59:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.203 00:59:21 -- nvmf/common.sh@410 -- # return 0 00:19:38.203 00:59:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:38.203 00:59:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.203 00:59:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:38.203 00:59:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.203 00:59:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:38.203 00:59:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:38.203 00:59:21 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:38.203 00:59:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:38.203 00:59:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:38.203 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:19:38.203 00:59:21 -- nvmf/common.sh@469 -- # nvmfpid=3423477 00:19:38.203 00:59:21 -- nvmf/common.sh@470 -- # waitforlisten 3423477 00:19:38.203 00:59:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:38.203 00:59:21 -- common/autotest_common.sh@819 -- # '[' -z 3423477 ']' 00:19:38.203 00:59:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.203 00:59:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:38.203 00:59:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.203 00:59:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:38.203 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:19:38.203 [2024-07-23 00:59:21.994248] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:38.203 [2024-07-23 00:59:21.994329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.203 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.203 [2024-07-23 00:59:22.064375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.203 [2024-07-23 00:59:22.151882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:38.203 [2024-07-23 00:59:22.152059] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.203 [2024-07-23 00:59:22.152078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.203 [2024-07-23 00:59:22.152092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
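The nvmf_tgt start-up notices above come from the target running inside the network namespace that the preceding trace builds: one E810 port (cvl_0_0) is moved into a namespace for the target, while the other (cvl_0_1) stays in the default namespace as the initiator. A condensed sketch of that traced setup, with interface names, addresses and flags copied from the log (binary path shortened; not a verbatim excerpt of nvmf/common.sh):

    # move one port into a namespace for the target, keep the other as initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # reachability checks, as traced
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # the target is then launched inside that namespace, paused until RPC-driven init
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc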
00:19:38.203 [2024-07-23 00:59:22.152130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.203 00:59:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:38.203 00:59:22 -- common/autotest_common.sh@852 -- # return 0 00:19:38.203 00:59:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:38.203 00:59:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:38.203 00:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.203 00:59:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.203 00:59:22 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:38.203 00:59:22 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:38.461 true 00:19:38.461 00:59:22 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.461 00:59:22 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:38.719 00:59:22 -- target/tls.sh@82 -- # version=0 00:19:38.719 00:59:22 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:38.719 00:59:22 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:38.719 00:59:22 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.719 00:59:22 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:38.977 00:59:23 -- target/tls.sh@90 -- # version=13 00:19:38.977 00:59:23 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:38.977 00:59:23 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:39.236 00:59:23 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.236 00:59:23 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:39.493 00:59:23 -- target/tls.sh@98 -- # version=7 00:19:39.494 00:59:23 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:39.494 00:59:23 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.494 00:59:23 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:39.751 00:59:23 -- target/tls.sh@105 -- # ktls=false 00:19:39.751 00:59:23 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:39.751 00:59:23 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:40.008 00:59:24 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.008 00:59:24 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:40.266 00:59:24 -- target/tls.sh@113 -- # ktls=true 00:19:40.266 00:59:24 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:40.266 00:59:24 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:40.523 00:59:24 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.523 00:59:24 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:40.781 00:59:24 -- target/tls.sh@121 -- # ktls=false 00:19:40.781 00:59:24 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:40.781 00:59:24 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:40.781 00:59:24 -- target/tls.sh@49 -- # local key hash crc 00:19:40.781 00:59:24 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:40.781 00:59:24 -- target/tls.sh@51 -- # hash=01 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # gzip -1 -c 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # tail -c8 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # head -c 4 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # crc='p$H�' 00:19:40.781 00:59:24 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:40.781 00:59:24 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:40.781 00:59:24 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.781 00:59:24 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.781 00:59:24 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:40.781 00:59:24 -- target/tls.sh@49 -- # local key hash crc 00:19:40.781 00:59:24 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:40.781 00:59:24 -- target/tls.sh@51 -- # hash=01 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # gzip -1 -c 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # tail -c8 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # head -c 4 00:19:40.781 00:59:24 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:40.781 00:59:24 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:40.781 00:59:24 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:40.781 00:59:24 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.781 00:59:24 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.781 00:59:24 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.781 00:59:24 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:40.781 00:59:24 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.781 00:59:24 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.781 00:59:24 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.781 00:59:24 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:40.781 00:59:24 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:41.038 00:59:25 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:41.296 00:59:25 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:41.296 00:59:25 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:41.296 00:59:25 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.553 [2024-07-23 00:59:25.703256] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
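The format_interchange_psk trace above turns the two configured keys into NVMe TLS PSK interchange strings before they are written to key1.txt and key2.txt. Pulled out of the xtrace and written as a standalone sketch (same pipeline as the traced target/tls.sh helper, not its literal source): the CRC32 is taken from the gzip trailer, whose last 8 bytes are CRC32 plus input size, and the interchange string is the base64 of the key bytes plus that CRC, wrapped in a version/hash prefix.

    key=00112233445566778899aabbccddeeff     # configured PSK, as in the trace
    hash=01                                  # hash identifier carried in the prefix
    # last 8 bytes of a gzip stream are CRC32 (little-endian) + input size; keep the CRC32
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # interchange form: "NVMeTLSkey-1:<hash>:" + base64(key bytes + CRC32 bytes) + ":"
    # (as in the traced helper, the CRC bytes sit in a shell variable; none is NUL here)
    psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: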
00:19:41.553 00:59:25 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.810 00:59:25 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.067 [2024-07-23 00:59:26.252794] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.067 [2024-07-23 00:59:26.253042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.325 00:59:26 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.325 malloc0 00:19:42.325 00:59:26 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.889 00:59:26 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:42.889 00:59:27 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:43.146 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.108 Initializing NVMe Controllers 00:19:53.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:53.108 Initialization complete. Launching workers. 
00:19:53.108 ======================================================== 00:19:53.108 Latency(us) 00:19:53.108 Device Information : IOPS MiB/s Average min max 00:19:53.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7832.69 30.60 8173.44 1499.07 8942.69 00:19:53.108 ======================================================== 00:19:53.108 Total : 7832.69 30.60 8173.44 1499.07 8942.69 00:19:53.108 00:19:53.108 00:59:37 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.108 00:59:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.108 00:59:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.108 00:59:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:53.108 00:59:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:53.108 00:59:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.108 00:59:37 -- target/tls.sh@28 -- # bdevperf_pid=3425318 00:19:53.108 00:59:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.108 00:59:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.108 00:59:37 -- target/tls.sh@31 -- # waitforlisten 3425318 /var/tmp/bdevperf.sock 00:19:53.108 00:59:37 -- common/autotest_common.sh@819 -- # '[' -z 3425318 ']' 00:19:53.108 00:59:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.108 00:59:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:53.108 00:59:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.108 00:59:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:53.108 00:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:53.108 [2024-07-23 00:59:37.213721] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
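The spdk_nvme_perf run summarized above (TLS enabled with -S ssl and --psk-path) and the bdevperf run now starting both connect to a subsystem the trace provisioned for TLS over rpc.py. Condensed from those traced calls, with the workspace paths shortened, the target-side provisioning amounts to:

    # target was started with --wait-for-rpc, so socket options are set before init
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    # TCP transport, a TLS-enabled listener (-k), and a malloc namespace
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # only hosts presenting this PSK may connect
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk ./test/nvmf/target/key1.txt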
00:19:53.108 [2024-07-23 00:59:37.213798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425318 ] 00:19:53.108 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.108 [2024-07-23 00:59:37.270308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.366 [2024-07-23 00:59:37.355445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.300 00:59:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.300 00:59:38 -- common/autotest_common.sh@852 -- # return 0 00:19:54.300 00:59:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.300 [2024-07-23 00:59:38.416501] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.300 TLSTESTn1 00:19:54.558 00:59:38 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.558 Running I/O for 10 seconds... 00:20:04.553 00:20:04.553 Latency(us) 00:20:04.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.553 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.553 Verification LBA range: start 0x0 length 0x2000 00:20:04.553 TLSTESTn1 : 10.03 2337.13 9.13 0.00 0.00 54694.24 4708.88 59807.67 00:20:04.553 =================================================================================================================== 00:20:04.553 Total : 2337.13 9.13 0.00 0.00 54694.24 4708.88 59807.67 00:20:04.553 0 00:20:04.553 00:59:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.553 00:59:48 -- target/tls.sh@45 -- # killprocess 3425318 00:20:04.553 00:59:48 -- common/autotest_common.sh@926 -- # '[' -z 3425318 ']' 00:20:04.553 00:59:48 -- common/autotest_common.sh@930 -- # kill -0 3425318 00:20:04.553 00:59:48 -- common/autotest_common.sh@931 -- # uname 00:20:04.553 00:59:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:04.553 00:59:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3425318 00:20:04.553 00:59:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:04.553 00:59:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:04.553 00:59:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3425318' 00:20:04.553 killing process with pid 3425318 00:20:04.553 00:59:48 -- common/autotest_common.sh@945 -- # kill 3425318 00:20:04.553 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.553 00:20:04.553 Latency(us) 00:20:04.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.553 =================================================================================================================== 00:20:04.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.553 00:59:48 -- common/autotest_common.sh@950 -- # wait 3425318 00:20:04.811 00:59:48 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.811 00:59:48 -- common/autotest_common.sh@640 -- # local es=0 00:20:04.811 00:59:48 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.811 00:59:48 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:04.811 00:59:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:04.811 00:59:48 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:04.811 00:59:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:04.811 00:59:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.811 00:59:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.811 00:59:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.811 00:59:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.811 00:59:48 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:04.811 00:59:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.811 00:59:48 -- target/tls.sh@28 -- # bdevperf_pid=3426800 00:20:04.811 00:59:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.811 00:59:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.811 00:59:48 -- target/tls.sh@31 -- # waitforlisten 3426800 /var/tmp/bdevperf.sock 00:20:04.811 00:59:48 -- common/autotest_common.sh@819 -- # '[' -z 3426800 ']' 00:20:04.811 00:59:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.811 00:59:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.811 00:59:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.811 00:59:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.811 00:59:48 -- common/autotest_common.sh@10 -- # set +x 00:20:04.811 [2024-07-23 00:59:48.999075] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:04.811 [2024-07-23 00:59:48.999153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426800 ] 00:20:05.069 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.069 [2024-07-23 00:59:49.055391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.069 [2024-07-23 00:59:49.136023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.000 00:59:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:06.000 00:59:49 -- common/autotest_common.sh@852 -- # return 0 00:20:06.000 00:59:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:06.258 [2024-07-23 00:59:50.228837] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.258 [2024-07-23 00:59:50.238782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.258 [2024-07-23 00:59:50.239064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f130 (107): Transport endpoint is not connected 00:20:06.258 [2024-07-23 00:59:50.240047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f130 (9): Bad file descriptor 00:20:06.258 [2024-07-23 00:59:50.241045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.258 [2024-07-23 00:59:50.241067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.258 [2024-07-23 00:59:50.241087] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:06.258 request: 00:20:06.258 { 00:20:06.258 "name": "TLSTEST", 00:20:06.258 "trtype": "tcp", 00:20:06.258 "traddr": "10.0.0.2", 00:20:06.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.258 "adrfam": "ipv4", 00:20:06.258 "trsvcid": "4420", 00:20:06.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.258 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:06.258 "method": "bdev_nvme_attach_controller", 00:20:06.258 "req_id": 1 00:20:06.258 } 00:20:06.258 Got JSON-RPC error response 00:20:06.258 response: 00:20:06.258 { 00:20:06.258 "code": -32602, 00:20:06.258 "message": "Invalid parameters" 00:20:06.258 } 00:20:06.258 00:59:50 -- target/tls.sh@36 -- # killprocess 3426800 00:20:06.258 00:59:50 -- common/autotest_common.sh@926 -- # '[' -z 3426800 ']' 00:20:06.258 00:59:50 -- common/autotest_common.sh@930 -- # kill -0 3426800 00:20:06.258 00:59:50 -- common/autotest_common.sh@931 -- # uname 00:20:06.258 00:59:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:06.258 00:59:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3426800 00:20:06.258 00:59:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:06.258 00:59:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:06.258 00:59:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3426800' 00:20:06.258 killing process with pid 3426800 00:20:06.258 00:59:50 -- common/autotest_common.sh@945 -- # kill 3426800 00:20:06.258 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.258 00:20:06.258 Latency(us) 00:20:06.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.258 =================================================================================================================== 00:20:06.258 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.258 00:59:50 -- common/autotest_common.sh@950 -- # wait 3426800 00:20:06.516 00:59:50 -- target/tls.sh@37 -- # return 1 00:20:06.516 00:59:50 -- common/autotest_common.sh@643 -- # es=1 00:20:06.516 00:59:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:06.516 00:59:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:06.516 00:59:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:06.516 00:59:50 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.516 00:59:50 -- common/autotest_common.sh@640 -- # local es=0 00:20:06.516 00:59:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.516 00:59:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:06.516 00:59:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.516 00:59:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:06.516 00:59:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.516 00:59:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.516 00:59:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.516 00:59:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.516 00:59:50 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:06.516 00:59:50 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:06.516 00:59:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.516 00:59:50 -- target/tls.sh@28 -- # bdevperf_pid=3426958 00:20:06.516 00:59:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.516 00:59:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.517 00:59:50 -- target/tls.sh@31 -- # waitforlisten 3426958 /var/tmp/bdevperf.sock 00:20:06.517 00:59:50 -- common/autotest_common.sh@819 -- # '[' -z 3426958 ']' 00:20:06.517 00:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.517 00:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.517 00:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.517 00:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.517 00:59:50 -- common/autotest_common.sh@10 -- # set +x 00:20:06.517 [2024-07-23 00:59:50.549503] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:06.517 [2024-07-23 00:59:50.549584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426958 ] 00:20:06.517 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.517 [2024-07-23 00:59:50.606835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.517 [2024-07-23 00:59:50.688839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.450 00:59:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.450 00:59:51 -- common/autotest_common.sh@852 -- # return 0 00:20:07.450 00:59:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.708 [2024-07-23 00:59:51.713677] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.708 [2024-07-23 00:59:51.722423] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:07.708 [2024-07-23 00:59:51.722456] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:07.708 [2024-07-23 00:59:51.722494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:07.708 [2024-07-23 00:59:51.723551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4c130 (107): Transport endpoint is not connected 00:20:07.708 [2024-07-23 00:59:51.724541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1b4c130 (9): Bad file descriptor 00:20:07.708 [2024-07-23 00:59:51.725539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.708 [2024-07-23 00:59:51.725562] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:07.708 [2024-07-23 00:59:51.725582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.708 request: 00:20:07.708 { 00:20:07.708 "name": "TLSTEST", 00:20:07.708 "trtype": "tcp", 00:20:07.708 "traddr": "10.0.0.2", 00:20:07.708 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:07.708 "adrfam": "ipv4", 00:20:07.708 "trsvcid": "4420", 00:20:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.708 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:07.708 "method": "bdev_nvme_attach_controller", 00:20:07.708 "req_id": 1 00:20:07.708 } 00:20:07.708 Got JSON-RPC error response 00:20:07.708 response: 00:20:07.708 { 00:20:07.708 "code": -32602, 00:20:07.708 "message": "Invalid parameters" 00:20:07.708 } 00:20:07.708 00:59:51 -- target/tls.sh@36 -- # killprocess 3426958 00:20:07.708 00:59:51 -- common/autotest_common.sh@926 -- # '[' -z 3426958 ']' 00:20:07.708 00:59:51 -- common/autotest_common.sh@930 -- # kill -0 3426958 00:20:07.708 00:59:51 -- common/autotest_common.sh@931 -- # uname 00:20:07.708 00:59:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:07.708 00:59:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3426958 00:20:07.708 00:59:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:07.708 00:59:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:07.708 00:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3426958' 00:20:07.708 killing process with pid 3426958 00:20:07.708 00:59:51 -- common/autotest_common.sh@945 -- # kill 3426958 00:20:07.708 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.708 00:20:07.708 Latency(us) 00:20:07.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.708 =================================================================================================================== 00:20:07.708 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.708 00:59:51 -- common/autotest_common.sh@950 -- # wait 3426958 00:20:07.965 00:59:51 -- target/tls.sh@37 -- # return 1 00:20:07.965 00:59:51 -- common/autotest_common.sh@643 -- # es=1 00:20:07.965 00:59:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.965 00:59:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.965 00:59:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.965 00:59:51 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.965 00:59:51 -- common/autotest_common.sh@640 -- # local es=0 00:20:07.965 00:59:51 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.965 00:59:51 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:07.965 00:59:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.965 00:59:51 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:07.965 00:59:51 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.965 00:59:51 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.965 00:59:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.965 00:59:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:07.965 00:59:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.965 00:59:51 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:07.965 00:59:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.965 00:59:51 -- target/tls.sh@28 -- # bdevperf_pid=3427105 00:20:07.965 00:59:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.965 00:59:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.965 00:59:51 -- target/tls.sh@31 -- # waitforlisten 3427105 /var/tmp/bdevperf.sock 00:20:07.965 00:59:51 -- common/autotest_common.sh@819 -- # '[' -z 3427105 ']' 00:20:07.965 00:59:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.965 00:59:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:07.965 00:59:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.965 00:59:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:07.965 00:59:51 -- common/autotest_common.sh@10 -- # set +x 00:20:07.965 [2024-07-23 00:59:52.037573] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:07.965 [2024-07-23 00:59:52.037687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427105 ] 00:20:07.965 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.965 [2024-07-23 00:59:52.097463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.223 [2024-07-23 00:59:52.183325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.787 00:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:08.787 00:59:52 -- common/autotest_common.sh@852 -- # return 0 00:20:08.787 00:59:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.044 [2024-07-23 00:59:53.239979] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.044 [2024-07-23 00:59:53.246446] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:09.044 [2024-07-23 00:59:53.246480] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:09.044 [2024-07-23 00:59:53.246516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:09.044 [2024-07-23 00:59:53.246967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629130 (107): Transport endpoint is not connected 00:20:09.302 [2024-07-23 00:59:53.247948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629130 (9): Bad file descriptor 00:20:09.302 [2024-07-23 00:59:53.248948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:09.302 [2024-07-23 00:59:53.248972] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:09.302 [2024-07-23 00:59:53.248995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:09.302 request: 00:20:09.302 { 00:20:09.302 "name": "TLSTEST", 00:20:09.302 "trtype": "tcp", 00:20:09.302 "traddr": "10.0.0.2", 00:20:09.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.302 "adrfam": "ipv4", 00:20:09.302 "trsvcid": "4420", 00:20:09.302 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:09.302 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:09.302 "method": "bdev_nvme_attach_controller", 00:20:09.302 "req_id": 1 00:20:09.302 } 00:20:09.302 Got JSON-RPC error response 00:20:09.302 response: 00:20:09.302 { 00:20:09.302 "code": -32602, 00:20:09.302 "message": "Invalid parameters" 00:20:09.302 } 00:20:09.302 00:59:53 -- target/tls.sh@36 -- # killprocess 3427105 00:20:09.302 00:59:53 -- common/autotest_common.sh@926 -- # '[' -z 3427105 ']' 00:20:09.302 00:59:53 -- common/autotest_common.sh@930 -- # kill -0 3427105 00:20:09.302 00:59:53 -- common/autotest_common.sh@931 -- # uname 00:20:09.302 00:59:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.302 00:59:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3427105 00:20:09.302 00:59:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:09.302 00:59:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:09.303 00:59:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3427105' 00:20:09.303 killing process with pid 3427105 00:20:09.303 00:59:53 -- common/autotest_common.sh@945 -- # kill 3427105 00:20:09.303 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.303 00:20:09.303 Latency(us) 00:20:09.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.303 =================================================================================================================== 00:20:09.303 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.303 00:59:53 -- common/autotest_common.sh@950 -- # wait 3427105 00:20:09.560 00:59:53 -- target/tls.sh@37 -- # return 1 00:20:09.560 00:59:53 -- common/autotest_common.sh@643 -- # es=1 00:20:09.560 00:59:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:09.560 00:59:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:09.560 00:59:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:09.560 00:59:53 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:09.560 00:59:53 -- common/autotest_common.sh@640 -- # local es=0 00:20:09.560 00:59:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:09.560 00:59:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:09.560 00:59:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.560 00:59:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:09.560 00:59:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.560 00:59:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:09.560 00:59:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.560 00:59:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.560 00:59:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.560 00:59:53 -- target/tls.sh@23 -- # psk= 00:20:09.560 00:59:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.560 00:59:53 -- target/tls.sh@28 
-- # bdevperf_pid=3427378 00:20:09.560 00:59:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.560 00:59:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.560 00:59:53 -- target/tls.sh@31 -- # waitforlisten 3427378 /var/tmp/bdevperf.sock 00:20:09.560 00:59:53 -- common/autotest_common.sh@819 -- # '[' -z 3427378 ']' 00:20:09.560 00:59:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.560 00:59:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.560 00:59:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.560 00:59:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.560 00:59:53 -- common/autotest_common.sh@10 -- # set +x 00:20:09.560 [2024-07-23 00:59:53.560662] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:09.560 [2024-07-23 00:59:53.560737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427378 ] 00:20:09.560 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.560 [2024-07-23 00:59:53.616404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.560 [2024-07-23 00:59:53.695098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.492 00:59:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:10.492 00:59:54 -- common/autotest_common.sh@852 -- # return 0 00:20:10.492 00:59:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.749 [2024-07-23 00:59:54.710237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:10.749 [2024-07-23 00:59:54.712405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73a810 (9): Bad file descriptor 00:20:10.749 [2024-07-23 00:59:54.713400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.749 [2024-07-23 00:59:54.713426] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:10.749 [2024-07-23 00:59:54.713447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:10.749 request: 00:20:10.749 { 00:20:10.749 "name": "TLSTEST", 00:20:10.749 "trtype": "tcp", 00:20:10.749 "traddr": "10.0.0.2", 00:20:10.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.749 "adrfam": "ipv4", 00:20:10.749 "trsvcid": "4420", 00:20:10.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.749 "method": "bdev_nvme_attach_controller", 00:20:10.749 "req_id": 1 00:20:10.749 } 00:20:10.749 Got JSON-RPC error response 00:20:10.749 response: 00:20:10.749 { 00:20:10.749 "code": -32602, 00:20:10.749 "message": "Invalid parameters" 00:20:10.749 } 00:20:10.749 00:59:54 -- target/tls.sh@36 -- # killprocess 3427378 00:20:10.749 00:59:54 -- common/autotest_common.sh@926 -- # '[' -z 3427378 ']' 00:20:10.749 00:59:54 -- common/autotest_common.sh@930 -- # kill -0 3427378 00:20:10.749 00:59:54 -- common/autotest_common.sh@931 -- # uname 00:20:10.749 00:59:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.749 00:59:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3427378 00:20:10.749 00:59:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:10.749 00:59:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:10.749 00:59:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3427378' 00:20:10.749 killing process with pid 3427378 00:20:10.749 00:59:54 -- common/autotest_common.sh@945 -- # kill 3427378 00:20:10.749 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.749 00:20:10.749 Latency(us) 00:20:10.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.749 =================================================================================================================== 00:20:10.749 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:10.749 00:59:54 -- common/autotest_common.sh@950 -- # wait 3427378 00:20:11.007 00:59:54 -- target/tls.sh@37 -- # return 1 00:20:11.007 00:59:54 -- common/autotest_common.sh@643 -- # es=1 00:20:11.007 00:59:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:11.007 00:59:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:11.007 00:59:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:11.007 00:59:54 -- target/tls.sh@167 -- # killprocess 3423477 00:20:11.007 00:59:54 -- common/autotest_common.sh@926 -- # '[' -z 3423477 ']' 00:20:11.007 00:59:54 -- common/autotest_common.sh@930 -- # kill -0 3423477 00:20:11.007 00:59:54 -- common/autotest_common.sh@931 -- # uname 00:20:11.007 00:59:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:11.007 00:59:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3423477 00:20:11.007 00:59:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:11.007 00:59:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:11.007 00:59:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3423477' 00:20:11.007 killing process with pid 3423477 00:20:11.007 00:59:54 -- common/autotest_common.sh@945 -- # kill 3423477 00:20:11.007 00:59:54 -- common/autotest_common.sh@950 -- # wait 3423477 00:20:11.265 00:59:55 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:11.265 00:59:55 -- target/tls.sh@49 -- # local key hash crc 00:20:11.265 00:59:55 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:11.265 00:59:55 -- target/tls.sh@51 -- # hash=02 00:20:11.265 00:59:55 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:11.265 00:59:55 -- target/tls.sh@52 -- # gzip -1 -c 00:20:11.265 00:59:55 -- target/tls.sh@52 -- # tail -c8 00:20:11.265 00:59:55 -- target/tls.sh@52 -- # head -c 4 00:20:11.265 00:59:55 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:11.265 00:59:55 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:11.265 00:59:55 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:11.265 00:59:55 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:11.265 00:59:55 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:11.265 00:59:55 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.265 00:59:55 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:11.265 00:59:55 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.265 00:59:55 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:11.265 00:59:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:11.265 00:59:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:11.265 00:59:55 -- common/autotest_common.sh@10 -- # set +x 00:20:11.265 00:59:55 -- nvmf/common.sh@469 -- # nvmfpid=3427544 00:20:11.265 00:59:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.265 00:59:55 -- nvmf/common.sh@470 -- # waitforlisten 3427544 00:20:11.265 00:59:55 -- common/autotest_common.sh@819 -- # '[' -z 3427544 ']' 00:20:11.265 00:59:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.265 00:59:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.265 00:59:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.265 00:59:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.265 00:59:55 -- common/autotest_common.sh@10 -- # set +x 00:20:11.265 [2024-07-23 00:59:55.306951] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:11.265 [2024-07-23 00:59:55.307052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.265 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.265 [2024-07-23 00:59:55.369312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.265 [2024-07-23 00:59:55.456358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:11.265 [2024-07-23 00:59:55.456539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.265 [2024-07-23 00:59:55.456556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.265 [2024-07-23 00:59:55.456568] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
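The key_long derived just above uses hash identifier 02 and a 48-hex-character key, but the traced pipeline is otherwise the same gzip-CRC-plus-base64 derivation sketched earlier; only the identifier embedded in the prefix changes:

    # same traced helper, longer key and hash id 02 (values copied from the log)
    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: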
00:20:11.265 [2024-07-23 00:59:55.456608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.197 00:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:12.197 00:59:56 -- common/autotest_common.sh@852 -- # return 0 00:20:12.197 00:59:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:12.197 00:59:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:12.197 00:59:56 -- common/autotest_common.sh@10 -- # set +x 00:20:12.197 00:59:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.197 00:59:56 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:12.197 00:59:56 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:12.197 00:59:56 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.454 [2024-07-23 00:59:56.532704] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.454 00:59:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.712 00:59:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.969 [2024-07-23 00:59:57.046111] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.969 [2024-07-23 00:59:57.046344] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.969 00:59:57 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:13.227 malloc0 00:20:13.227 00:59:57 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.486 00:59:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.744 00:59:57 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.744 00:59:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:13.744 00:59:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:13.744 00:59:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:13.744 00:59:57 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:13.744 00:59:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.744 00:59:57 -- target/tls.sh@28 -- # bdevperf_pid=3427968 00:20:13.744 00:59:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.744 00:59:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.744 00:59:57 -- target/tls.sh@31 -- # waitforlisten 3427968 /var/tmp/bdevperf.sock 00:20:13.744 00:59:57 -- common/autotest_common.sh@819 -- # '[' -z 3427968 
']' 00:20:13.744 00:59:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.744 00:59:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.744 00:59:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.744 00:59:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.744 00:59:57 -- common/autotest_common.sh@10 -- # set +x 00:20:13.744 [2024-07-23 00:59:57.869634] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:13.744 [2024-07-23 00:59:57.869706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427968 ] 00:20:13.744 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.744 [2024-07-23 00:59:57.926013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.002 [2024-07-23 00:59:58.008084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.933 00:59:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.933 00:59:58 -- common/autotest_common.sh@852 -- # return 0 00:20:14.933 00:59:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:14.933 [2024-07-23 00:59:59.057466] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.191 TLSTESTn1 00:20:15.191 00:59:59 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:15.191 Running I/O for 10 seconds... 
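While the 10-second verify workload runs, the RPC sequence just traced for the happy path (setup_nvmf_tgt at target/tls.sh@58-67 plus the bdevperf attach at target/tls.sh@34) can be summarised as the sketch below. NQNs, addresses and flags are taken from the trace; the RPC and KEY shorthands stand in for the full workspace paths used above.

  RPC=./scripts/rpc.py                       # shortened; the trace uses the full workspace path
  KEY=./test/nvmf/target/key_long.txt        # the interchange-format PSK created earlier

  # Target side: TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled
  # listener (-k), and the host admitted with the PSK file.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

  # Initiator side: the already-running bdevperf attaches over TCP with the same PSK,
  # then bdevperf.py drives the verify workload against the resulting TLSTESTn1 bdev.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$KEY"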
00:20:25.152 00:20:25.152 Latency(us) 00:20:25.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.152 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.152 Verification LBA range: start 0x0 length 0x2000 00:20:25.152 TLSTESTn1 : 10.04 2359.60 9.22 0.00 0.00 54154.64 7427.41 62526.20 00:20:25.153 =================================================================================================================== 00:20:25.153 Total : 2359.60 9.22 0.00 0.00 54154.64 7427.41 62526.20 00:20:25.153 0 00:20:25.153 01:00:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.153 01:00:09 -- target/tls.sh@45 -- # killprocess 3427968 00:20:25.153 01:00:09 -- common/autotest_common.sh@926 -- # '[' -z 3427968 ']' 00:20:25.153 01:00:09 -- common/autotest_common.sh@930 -- # kill -0 3427968 00:20:25.153 01:00:09 -- common/autotest_common.sh@931 -- # uname 00:20:25.153 01:00:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:25.153 01:00:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3427968 00:20:25.409 01:00:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:25.409 01:00:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:25.409 01:00:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3427968' 00:20:25.409 killing process with pid 3427968 00:20:25.409 01:00:09 -- common/autotest_common.sh@945 -- # kill 3427968 00:20:25.409 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.409 00:20:25.409 Latency(us) 00:20:25.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.409 =================================================================================================================== 00:20:25.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.409 01:00:09 -- common/autotest_common.sh@950 -- # wait 3427968 00:20:25.409 01:00:09 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.409 01:00:09 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.409 01:00:09 -- common/autotest_common.sh@640 -- # local es=0 00:20:25.409 01:00:09 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.409 01:00:09 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:25.409 01:00:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:25.409 01:00:09 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:25.409 01:00:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:25.409 01:00:09 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.409 01:00:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.409 01:00:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.409 01:00:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.409 01:00:09 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:25.409 01:00:09 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.409 01:00:09 -- target/tls.sh@28 -- # bdevperf_pid=3429969 00:20:25.409 01:00:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.409 01:00:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.409 01:00:09 -- target/tls.sh@31 -- # waitforlisten 3429969 /var/tmp/bdevperf.sock 00:20:25.409 01:00:09 -- common/autotest_common.sh@819 -- # '[' -z 3429969 ']' 00:20:25.409 01:00:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.409 01:00:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:25.409 01:00:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.409 01:00:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:25.409 01:00:09 -- common/autotest_common.sh@10 -- # set +x 00:20:25.696 [2024-07-23 01:00:09.611703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:25.696 [2024-07-23 01:00:09.611787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429969 ] 00:20:25.696 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.696 [2024-07-23 01:00:09.671416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.696 [2024-07-23 01:00:09.759832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.630 01:00:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:26.630 01:00:10 -- common/autotest_common.sh@852 -- # return 0 00:20:26.630 01:00:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.630 [2024-07-23 01:00:10.768526] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.630 [2024-07-23 01:00:10.768574] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:26.630 request: 00:20:26.630 { 00:20:26.630 "name": "TLSTEST", 00:20:26.630 "trtype": "tcp", 00:20:26.630 "traddr": "10.0.0.2", 00:20:26.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.630 "adrfam": "ipv4", 00:20:26.630 "trsvcid": "4420", 00:20:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.630 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:26.630 "method": "bdev_nvme_attach_controller", 00:20:26.631 "req_id": 1 00:20:26.631 } 00:20:26.631 Got JSON-RPC error response 00:20:26.631 response: 00:20:26.631 { 00:20:26.631 "code": -22, 00:20:26.631 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:26.631 } 00:20:26.631 01:00:10 -- target/tls.sh@36 -- # killprocess 3429969 00:20:26.631 01:00:10 -- common/autotest_common.sh@926 -- # '[' -z 3429969 ']' 00:20:26.631 01:00:10 -- 
common/autotest_common.sh@930 -- # kill -0 3429969 00:20:26.631 01:00:10 -- common/autotest_common.sh@931 -- # uname 00:20:26.631 01:00:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.631 01:00:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3429969 00:20:26.631 01:00:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:26.631 01:00:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:26.631 01:00:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3429969' 00:20:26.631 killing process with pid 3429969 00:20:26.631 01:00:10 -- common/autotest_common.sh@945 -- # kill 3429969 00:20:26.631 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.631 00:20:26.631 Latency(us) 00:20:26.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.631 =================================================================================================================== 00:20:26.631 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.631 01:00:10 -- common/autotest_common.sh@950 -- # wait 3429969 00:20:26.888 01:00:11 -- target/tls.sh@37 -- # return 1 00:20:26.888 01:00:11 -- common/autotest_common.sh@643 -- # es=1 00:20:26.888 01:00:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:26.888 01:00:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:26.888 01:00:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:26.888 01:00:11 -- target/tls.sh@183 -- # killprocess 3427544 00:20:26.888 01:00:11 -- common/autotest_common.sh@926 -- # '[' -z 3427544 ']' 00:20:26.888 01:00:11 -- common/autotest_common.sh@930 -- # kill -0 3427544 00:20:26.888 01:00:11 -- common/autotest_common.sh@931 -- # uname 00:20:26.888 01:00:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.888 01:00:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3427544 00:20:26.888 01:00:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:26.888 01:00:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:26.888 01:00:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3427544' 00:20:26.888 killing process with pid 3427544 00:20:26.888 01:00:11 -- common/autotest_common.sh@945 -- # kill 3427544 00:20:26.888 01:00:11 -- common/autotest_common.sh@950 -- # wait 3427544 00:20:27.144 01:00:11 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:27.144 01:00:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:27.144 01:00:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:27.144 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:20:27.144 01:00:11 -- nvmf/common.sh@469 -- # nvmfpid=3430241 00:20:27.144 01:00:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.144 01:00:11 -- nvmf/common.sh@470 -- # waitforlisten 3430241 00:20:27.144 01:00:11 -- common/autotest_common.sh@819 -- # '[' -z 3430241 ']' 00:20:27.145 01:00:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.145 01:00:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:27.145 01:00:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
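The failure above is the intended result of target/tls.sh@179-180: once the PSK file is loosened to 0666, bdev_nvme_attach_controller must reject it with -22 "Could not retrieve PSK from file". A minimal restatement of that check, using the same RPCs as the trace (relative paths are illustrative):

  RPC=./scripts/rpc.py
  KEY=./test/nvmf/target/key_long.txt
  chmod 0666 "$KEY"                          # deliberately over-permissive
  # rpc.py exits non-zero on a JSON-RPC error, which is what the NOT wrapper relies on.
  if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
      echo "ERROR: attach unexpectedly succeeded with a 0666 PSK file" >&2
      exit 1
  fi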
00:20:27.145 01:00:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:27.145 01:00:11 -- common/autotest_common.sh@10 -- # set +x 00:20:27.402 [2024-07-23 01:00:11.360326] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:27.402 [2024-07-23 01:00:11.360408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.402 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.402 [2024-07-23 01:00:11.429993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.402 [2024-07-23 01:00:11.520388] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.402 [2024-07-23 01:00:11.520569] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.402 [2024-07-23 01:00:11.520608] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.402 [2024-07-23 01:00:11.520643] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.402 [2024-07-23 01:00:11.520696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.333 01:00:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:28.333 01:00:12 -- common/autotest_common.sh@852 -- # return 0 00:20:28.333 01:00:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:28.333 01:00:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:28.333 01:00:12 -- common/autotest_common.sh@10 -- # set +x 00:20:28.334 01:00:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.334 01:00:12 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.334 01:00:12 -- common/autotest_common.sh@640 -- # local es=0 00:20:28.334 01:00:12 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.334 01:00:12 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:28.334 01:00:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.334 01:00:12 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:28.334 01:00:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.334 01:00:12 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.334 01:00:12 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.334 01:00:12 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.590 [2024-07-23 01:00:12.641559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.590 01:00:12 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.847 01:00:12 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.105 [2024-07-23 01:00:13.162958] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.105 [2024-07-23 01:00:13.163211] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.105 01:00:13 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.362 malloc0 00:20:29.362 01:00:13 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.620 01:00:13 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:29.890 [2024-07-23 01:00:13.876095] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:29.890 [2024-07-23 01:00:13.876140] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:29.890 [2024-07-23 01:00:13.876174] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:29.890 request: 00:20:29.890 { 00:20:29.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.890 "host": "nqn.2016-06.io.spdk:host1", 00:20:29.890 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:29.890 "method": "nvmf_subsystem_add_host", 00:20:29.890 "req_id": 1 00:20:29.890 } 00:20:29.890 Got JSON-RPC error response 00:20:29.890 response: 00:20:29.890 { 00:20:29.890 "code": -32603, 00:20:29.890 "message": "Internal error" 00:20:29.890 } 00:20:29.890 01:00:13 -- common/autotest_common.sh@643 -- # es=1 00:20:29.890 01:00:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:29.890 01:00:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:29.890 01:00:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:29.890 01:00:13 -- target/tls.sh@189 -- # killprocess 3430241 00:20:29.890 01:00:13 -- common/autotest_common.sh@926 -- # '[' -z 3430241 ']' 00:20:29.890 01:00:13 -- common/autotest_common.sh@930 -- # kill -0 3430241 00:20:29.890 01:00:13 -- common/autotest_common.sh@931 -- # uname 00:20:29.890 01:00:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:29.890 01:00:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3430241 00:20:29.890 01:00:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:29.890 01:00:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:29.890 01:00:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3430241' 00:20:29.890 killing process with pid 3430241 00:20:29.890 01:00:13 -- common/autotest_common.sh@945 -- # kill 3430241 00:20:29.890 01:00:13 -- common/autotest_common.sh@950 -- # wait 3430241 00:20:30.148 01:00:14 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:30.148 01:00:14 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:30.148 01:00:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:30.148 01:00:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:30.148 01:00:14 -- common/autotest_common.sh@10 -- # set +x 00:20:30.148 01:00:14 -- nvmf/common.sh@469 -- # nvmfpid=3430561 00:20:30.148 01:00:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:20:30.148 01:00:14 -- nvmf/common.sh@470 -- # waitforlisten 3430561 00:20:30.148 01:00:14 -- common/autotest_common.sh@819 -- # '[' -z 3430561 ']' 00:20:30.148 01:00:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.148 01:00:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:30.148 01:00:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.148 01:00:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:30.148 01:00:14 -- common/autotest_common.sh@10 -- # set +x 00:20:30.148 [2024-07-23 01:00:14.235182] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:30.148 [2024-07-23 01:00:14.235270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.148 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.148 [2024-07-23 01:00:14.305514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.405 [2024-07-23 01:00:14.390870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.405 [2024-07-23 01:00:14.391055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.405 [2024-07-23 01:00:14.391085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.405 [2024-07-23 01:00:14.391108] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
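The second permission check (target/tls.sh@186, traced above) exercises the same rule on the target side: nvmf_subsystem_add_host returns -32603 "Internal error" when the PSK file is world-readable. Sketched with the same illustrative shorthands:

  RPC=./scripts/rpc.py
  KEY=./test/nvmf/target/key_long.txt
  chmod 0666 "$KEY"
  # Transport, subsystem, listener and namespace setup succeed as usual, but admitting
  # the host with an over-permissive PSK file must fail:
  if $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
      echo "ERROR: nvmf_subsystem_add_host accepted a 0666 PSK file" >&2
      exit 1
  fi
  chmod 0600 "$KEY"                          # restore permissions for the next test, as the trace does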
00:20:30.405 [2024-07-23 01:00:14.391151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.970 01:00:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:30.970 01:00:15 -- common/autotest_common.sh@852 -- # return 0 00:20:30.970 01:00:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:30.970 01:00:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:30.970 01:00:15 -- common/autotest_common.sh@10 -- # set +x 00:20:30.970 01:00:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.970 01:00:15 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:30.970 01:00:15 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:30.970 01:00:15 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.228 [2024-07-23 01:00:15.385353] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.228 01:00:15 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.486 01:00:15 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.743 [2024-07-23 01:00:15.854660] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.743 [2024-07-23 01:00:15.854929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.743 01:00:15 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:32.000 malloc0 00:20:32.000 01:00:16 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.258 01:00:16 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.515 01:00:16 -- target/tls.sh@197 -- # bdevperf_pid=3430866 00:20:32.515 01:00:16 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.515 01:00:16 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.515 01:00:16 -- target/tls.sh@200 -- # waitforlisten 3430866 /var/tmp/bdevperf.sock 00:20:32.515 01:00:16 -- common/autotest_common.sh@819 -- # '[' -z 3430866 ']' 00:20:32.515 01:00:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.515 01:00:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.515 01:00:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:32.515 01:00:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.515 01:00:16 -- common/autotest_common.sh@10 -- # set +x 00:20:32.515 [2024-07-23 01:00:16.680527] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:32.515 [2024-07-23 01:00:16.680595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430866 ] 00:20:32.515 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.775 [2024-07-23 01:00:16.737516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.775 [2024-07-23 01:00:16.821099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.708 01:00:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.708 01:00:17 -- common/autotest_common.sh@852 -- # return 0 00:20:33.708 01:00:17 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.966 [2024-07-23 01:00:17.928695] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.966 TLSTESTn1 00:20:33.966 01:00:18 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:34.223 01:00:18 -- target/tls.sh@205 -- # tgtconf='{ 00:20:34.223 "subsystems": [ 00:20:34.223 { 00:20:34.223 "subsystem": "iobuf", 00:20:34.223 "config": [ 00:20:34.223 { 00:20:34.223 "method": "iobuf_set_options", 00:20:34.223 "params": { 00:20:34.223 "small_pool_count": 8192, 00:20:34.223 "large_pool_count": 1024, 00:20:34.223 "small_bufsize": 8192, 00:20:34.223 "large_bufsize": 135168 00:20:34.223 } 00:20:34.223 } 00:20:34.223 ] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "sock", 00:20:34.223 "config": [ 00:20:34.223 { 00:20:34.223 "method": "sock_impl_set_options", 00:20:34.223 "params": { 00:20:34.223 "impl_name": "posix", 00:20:34.223 "recv_buf_size": 2097152, 00:20:34.223 "send_buf_size": 2097152, 00:20:34.223 "enable_recv_pipe": true, 00:20:34.223 "enable_quickack": false, 00:20:34.223 "enable_placement_id": 0, 00:20:34.223 "enable_zerocopy_send_server": true, 00:20:34.223 "enable_zerocopy_send_client": false, 00:20:34.223 "zerocopy_threshold": 0, 00:20:34.223 "tls_version": 0, 00:20:34.223 "enable_ktls": false 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "sock_impl_set_options", 00:20:34.223 "params": { 00:20:34.223 "impl_name": "ssl", 00:20:34.223 "recv_buf_size": 4096, 00:20:34.223 "send_buf_size": 4096, 00:20:34.223 "enable_recv_pipe": true, 00:20:34.223 "enable_quickack": false, 00:20:34.223 "enable_placement_id": 0, 00:20:34.223 "enable_zerocopy_send_server": true, 00:20:34.223 "enable_zerocopy_send_client": false, 00:20:34.223 "zerocopy_threshold": 0, 00:20:34.223 "tls_version": 0, 00:20:34.223 "enable_ktls": false 00:20:34.223 } 00:20:34.223 } 00:20:34.223 ] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "vmd", 00:20:34.223 "config": [] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "accel", 00:20:34.223 "config": [ 00:20:34.223 { 00:20:34.223 "method": "accel_set_options", 00:20:34.223 "params": { 00:20:34.223 "small_cache_size": 128, 
00:20:34.223 "large_cache_size": 16, 00:20:34.223 "task_count": 2048, 00:20:34.223 "sequence_count": 2048, 00:20:34.223 "buf_count": 2048 00:20:34.223 } 00:20:34.223 } 00:20:34.223 ] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "bdev", 00:20:34.223 "config": [ 00:20:34.223 { 00:20:34.223 "method": "bdev_set_options", 00:20:34.223 "params": { 00:20:34.223 "bdev_io_pool_size": 65535, 00:20:34.223 "bdev_io_cache_size": 256, 00:20:34.223 "bdev_auto_examine": true, 00:20:34.223 "iobuf_small_cache_size": 128, 00:20:34.223 "iobuf_large_cache_size": 16 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "bdev_raid_set_options", 00:20:34.223 "params": { 00:20:34.223 "process_window_size_kb": 1024 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "bdev_iscsi_set_options", 00:20:34.223 "params": { 00:20:34.223 "timeout_sec": 30 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "bdev_nvme_set_options", 00:20:34.223 "params": { 00:20:34.223 "action_on_timeout": "none", 00:20:34.223 "timeout_us": 0, 00:20:34.223 "timeout_admin_us": 0, 00:20:34.223 "keep_alive_timeout_ms": 10000, 00:20:34.223 "transport_retry_count": 4, 00:20:34.223 "arbitration_burst": 0, 00:20:34.223 "low_priority_weight": 0, 00:20:34.223 "medium_priority_weight": 0, 00:20:34.223 "high_priority_weight": 0, 00:20:34.223 "nvme_adminq_poll_period_us": 10000, 00:20:34.223 "nvme_ioq_poll_period_us": 0, 00:20:34.223 "io_queue_requests": 0, 00:20:34.223 "delay_cmd_submit": true, 00:20:34.223 "bdev_retry_count": 3, 00:20:34.223 "transport_ack_timeout": 0, 00:20:34.223 "ctrlr_loss_timeout_sec": 0, 00:20:34.223 "reconnect_delay_sec": 0, 00:20:34.223 "fast_io_fail_timeout_sec": 0, 00:20:34.223 "generate_uuids": false, 00:20:34.223 "transport_tos": 0, 00:20:34.223 "io_path_stat": false, 00:20:34.223 "allow_accel_sequence": false 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "bdev_nvme_set_hotplug", 00:20:34.223 "params": { 00:20:34.223 "period_us": 100000, 00:20:34.223 "enable": false 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "bdev_malloc_create", 00:20:34.223 "params": { 00:20:34.223 "name": "malloc0", 00:20:34.223 "num_blocks": 8192, 00:20:34.223 "block_size": 4096, 00:20:34.223 "physical_block_size": 4096, 00:20:34.223 "uuid": "b7bb9d86-b4a4-4964-a208-cbb8384ee014", 00:20:34.223 "optimal_io_boundary": 0 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "bdev_wait_for_examine" 00:20:34.223 } 00:20:34.223 ] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "nbd", 00:20:34.223 "config": [] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "scheduler", 00:20:34.223 "config": [ 00:20:34.223 { 00:20:34.223 "method": "framework_set_scheduler", 00:20:34.223 "params": { 00:20:34.223 "name": "static" 00:20:34.223 } 00:20:34.223 } 00:20:34.223 ] 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "subsystem": "nvmf", 00:20:34.223 "config": [ 00:20:34.223 { 00:20:34.223 "method": "nvmf_set_config", 00:20:34.223 "params": { 00:20:34.223 "discovery_filter": "match_any", 00:20:34.223 "admin_cmd_passthru": { 00:20:34.223 "identify_ctrlr": false 00:20:34.223 } 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "nvmf_set_max_subsystems", 00:20:34.223 "params": { 00:20:34.223 "max_subsystems": 1024 00:20:34.223 } 00:20:34.223 }, 00:20:34.223 { 00:20:34.223 "method": "nvmf_set_crdt", 00:20:34.223 "params": { 00:20:34.223 "crdt1": 0, 00:20:34.223 "crdt2": 0, 00:20:34.223 "crdt3": 0 00:20:34.223 } 
00:20:34.223 }, 00:20:34.223 { 00:20:34.224 "method": "nvmf_create_transport", 00:20:34.224 "params": { 00:20:34.224 "trtype": "TCP", 00:20:34.224 "max_queue_depth": 128, 00:20:34.224 "max_io_qpairs_per_ctrlr": 127, 00:20:34.224 "in_capsule_data_size": 4096, 00:20:34.224 "max_io_size": 131072, 00:20:34.224 "io_unit_size": 131072, 00:20:34.224 "max_aq_depth": 128, 00:20:34.224 "num_shared_buffers": 511, 00:20:34.224 "buf_cache_size": 4294967295, 00:20:34.224 "dif_insert_or_strip": false, 00:20:34.224 "zcopy": false, 00:20:34.224 "c2h_success": false, 00:20:34.224 "sock_priority": 0, 00:20:34.224 "abort_timeout_sec": 1 00:20:34.224 } 00:20:34.224 }, 00:20:34.224 { 00:20:34.224 "method": "nvmf_create_subsystem", 00:20:34.224 "params": { 00:20:34.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.224 "allow_any_host": false, 00:20:34.224 "serial_number": "SPDK00000000000001", 00:20:34.224 "model_number": "SPDK bdev Controller", 00:20:34.224 "max_namespaces": 10, 00:20:34.224 "min_cntlid": 1, 00:20:34.224 "max_cntlid": 65519, 00:20:34.224 "ana_reporting": false 00:20:34.224 } 00:20:34.224 }, 00:20:34.224 { 00:20:34.224 "method": "nvmf_subsystem_add_host", 00:20:34.224 "params": { 00:20:34.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.224 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.224 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:34.224 } 00:20:34.224 }, 00:20:34.224 { 00:20:34.224 "method": "nvmf_subsystem_add_ns", 00:20:34.224 "params": { 00:20:34.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.224 "namespace": { 00:20:34.224 "nsid": 1, 00:20:34.224 "bdev_name": "malloc0", 00:20:34.224 "nguid": "B7BB9D86B4A44964A208CBB8384EE014", 00:20:34.224 "uuid": "b7bb9d86-b4a4-4964-a208-cbb8384ee014" 00:20:34.224 } 00:20:34.224 } 00:20:34.224 }, 00:20:34.224 { 00:20:34.224 "method": "nvmf_subsystem_add_listener", 00:20:34.224 "params": { 00:20:34.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.224 "listen_address": { 00:20:34.224 "trtype": "TCP", 00:20:34.224 "adrfam": "IPv4", 00:20:34.224 "traddr": "10.0.0.2", 00:20:34.224 "trsvcid": "4420" 00:20:34.224 }, 00:20:34.224 "secure_channel": true 00:20:34.224 } 00:20:34.224 } 00:20:34.224 ] 00:20:34.224 } 00:20:34.224 ] 00:20:34.224 }' 00:20:34.224 01:00:18 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:34.792 01:00:18 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:34.792 "subsystems": [ 00:20:34.792 { 00:20:34.792 "subsystem": "iobuf", 00:20:34.792 "config": [ 00:20:34.792 { 00:20:34.792 "method": "iobuf_set_options", 00:20:34.792 "params": { 00:20:34.792 "small_pool_count": 8192, 00:20:34.792 "large_pool_count": 1024, 00:20:34.792 "small_bufsize": 8192, 00:20:34.792 "large_bufsize": 135168 00:20:34.792 } 00:20:34.792 } 00:20:34.792 ] 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "subsystem": "sock", 00:20:34.792 "config": [ 00:20:34.792 { 00:20:34.792 "method": "sock_impl_set_options", 00:20:34.792 "params": { 00:20:34.792 "impl_name": "posix", 00:20:34.792 "recv_buf_size": 2097152, 00:20:34.792 "send_buf_size": 2097152, 00:20:34.792 "enable_recv_pipe": true, 00:20:34.792 "enable_quickack": false, 00:20:34.792 "enable_placement_id": 0, 00:20:34.792 "enable_zerocopy_send_server": true, 00:20:34.792 "enable_zerocopy_send_client": false, 00:20:34.792 "zerocopy_threshold": 0, 00:20:34.792 "tls_version": 0, 00:20:34.792 "enable_ktls": false 00:20:34.792 } 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "method": 
"sock_impl_set_options", 00:20:34.792 "params": { 00:20:34.792 "impl_name": "ssl", 00:20:34.792 "recv_buf_size": 4096, 00:20:34.792 "send_buf_size": 4096, 00:20:34.792 "enable_recv_pipe": true, 00:20:34.792 "enable_quickack": false, 00:20:34.792 "enable_placement_id": 0, 00:20:34.792 "enable_zerocopy_send_server": true, 00:20:34.792 "enable_zerocopy_send_client": false, 00:20:34.792 "zerocopy_threshold": 0, 00:20:34.792 "tls_version": 0, 00:20:34.792 "enable_ktls": false 00:20:34.792 } 00:20:34.792 } 00:20:34.792 ] 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "subsystem": "vmd", 00:20:34.792 "config": [] 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "subsystem": "accel", 00:20:34.792 "config": [ 00:20:34.792 { 00:20:34.792 "method": "accel_set_options", 00:20:34.792 "params": { 00:20:34.792 "small_cache_size": 128, 00:20:34.792 "large_cache_size": 16, 00:20:34.792 "task_count": 2048, 00:20:34.792 "sequence_count": 2048, 00:20:34.792 "buf_count": 2048 00:20:34.792 } 00:20:34.792 } 00:20:34.792 ] 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "subsystem": "bdev", 00:20:34.792 "config": [ 00:20:34.792 { 00:20:34.792 "method": "bdev_set_options", 00:20:34.792 "params": { 00:20:34.792 "bdev_io_pool_size": 65535, 00:20:34.792 "bdev_io_cache_size": 256, 00:20:34.792 "bdev_auto_examine": true, 00:20:34.792 "iobuf_small_cache_size": 128, 00:20:34.792 "iobuf_large_cache_size": 16 00:20:34.792 } 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "method": "bdev_raid_set_options", 00:20:34.792 "params": { 00:20:34.792 "process_window_size_kb": 1024 00:20:34.792 } 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "method": "bdev_iscsi_set_options", 00:20:34.792 "params": { 00:20:34.792 "timeout_sec": 30 00:20:34.792 } 00:20:34.792 }, 00:20:34.792 { 00:20:34.792 "method": "bdev_nvme_set_options", 00:20:34.792 "params": { 00:20:34.792 "action_on_timeout": "none", 00:20:34.792 "timeout_us": 0, 00:20:34.792 "timeout_admin_us": 0, 00:20:34.792 "keep_alive_timeout_ms": 10000, 00:20:34.792 "transport_retry_count": 4, 00:20:34.792 "arbitration_burst": 0, 00:20:34.792 "low_priority_weight": 0, 00:20:34.792 "medium_priority_weight": 0, 00:20:34.792 "high_priority_weight": 0, 00:20:34.792 "nvme_adminq_poll_period_us": 10000, 00:20:34.792 "nvme_ioq_poll_period_us": 0, 00:20:34.793 "io_queue_requests": 512, 00:20:34.793 "delay_cmd_submit": true, 00:20:34.793 "bdev_retry_count": 3, 00:20:34.793 "transport_ack_timeout": 0, 00:20:34.793 "ctrlr_loss_timeout_sec": 0, 00:20:34.793 "reconnect_delay_sec": 0, 00:20:34.793 "fast_io_fail_timeout_sec": 0, 00:20:34.793 "generate_uuids": false, 00:20:34.793 "transport_tos": 0, 00:20:34.793 "io_path_stat": false, 00:20:34.793 "allow_accel_sequence": false 00:20:34.793 } 00:20:34.793 }, 00:20:34.793 { 00:20:34.793 "method": "bdev_nvme_attach_controller", 00:20:34.793 "params": { 00:20:34.793 "name": "TLSTEST", 00:20:34.793 "trtype": "TCP", 00:20:34.793 "adrfam": "IPv4", 00:20:34.793 "traddr": "10.0.0.2", 00:20:34.793 "trsvcid": "4420", 00:20:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.793 "prchk_reftag": false, 00:20:34.793 "prchk_guard": false, 00:20:34.793 "ctrlr_loss_timeout_sec": 0, 00:20:34.793 "reconnect_delay_sec": 0, 00:20:34.793 "fast_io_fail_timeout_sec": 0, 00:20:34.793 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:34.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.793 "hdgst": false, 00:20:34.793 "ddgst": false 00:20:34.793 } 00:20:34.793 }, 00:20:34.793 { 00:20:34.793 "method": "bdev_nvme_set_hotplug", 00:20:34.793 
"params": { 00:20:34.793 "period_us": 100000, 00:20:34.793 "enable": false 00:20:34.793 } 00:20:34.793 }, 00:20:34.793 { 00:20:34.793 "method": "bdev_wait_for_examine" 00:20:34.793 } 00:20:34.793 ] 00:20:34.793 }, 00:20:34.793 { 00:20:34.793 "subsystem": "nbd", 00:20:34.793 "config": [] 00:20:34.793 } 00:20:34.793 ] 00:20:34.793 }' 00:20:34.793 01:00:18 -- target/tls.sh@208 -- # killprocess 3430866 00:20:34.793 01:00:18 -- common/autotest_common.sh@926 -- # '[' -z 3430866 ']' 00:20:34.793 01:00:18 -- common/autotest_common.sh@930 -- # kill -0 3430866 00:20:34.793 01:00:18 -- common/autotest_common.sh@931 -- # uname 00:20:34.793 01:00:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.793 01:00:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3430866 00:20:34.793 01:00:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:34.793 01:00:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:34.793 01:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3430866' 00:20:34.793 killing process with pid 3430866 00:20:34.793 01:00:18 -- common/autotest_common.sh@945 -- # kill 3430866 00:20:34.793 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.793 00:20:34.793 Latency(us) 00:20:34.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.793 =================================================================================================================== 00:20:34.793 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.793 01:00:18 -- common/autotest_common.sh@950 -- # wait 3430866 00:20:34.793 01:00:18 -- target/tls.sh@209 -- # killprocess 3430561 00:20:34.793 01:00:18 -- common/autotest_common.sh@926 -- # '[' -z 3430561 ']' 00:20:34.793 01:00:18 -- common/autotest_common.sh@930 -- # kill -0 3430561 00:20:34.793 01:00:18 -- common/autotest_common.sh@931 -- # uname 00:20:34.793 01:00:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.793 01:00:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3430561 00:20:34.793 01:00:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:34.793 01:00:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:34.793 01:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3430561' 00:20:34.793 killing process with pid 3430561 00:20:34.793 01:00:18 -- common/autotest_common.sh@945 -- # kill 3430561 00:20:34.793 01:00:18 -- common/autotest_common.sh@950 -- # wait 3430561 00:20:35.052 01:00:19 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:35.052 01:00:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:35.052 01:00:19 -- target/tls.sh@212 -- # echo '{ 00:20:35.052 "subsystems": [ 00:20:35.052 { 00:20:35.052 "subsystem": "iobuf", 00:20:35.052 "config": [ 00:20:35.052 { 00:20:35.052 "method": "iobuf_set_options", 00:20:35.052 "params": { 00:20:35.052 "small_pool_count": 8192, 00:20:35.052 "large_pool_count": 1024, 00:20:35.052 "small_bufsize": 8192, 00:20:35.052 "large_bufsize": 135168 00:20:35.052 } 00:20:35.052 } 00:20:35.052 ] 00:20:35.052 }, 00:20:35.052 { 00:20:35.052 "subsystem": "sock", 00:20:35.052 "config": [ 00:20:35.052 { 00:20:35.052 "method": "sock_impl_set_options", 00:20:35.052 "params": { 00:20:35.052 "impl_name": "posix", 00:20:35.052 "recv_buf_size": 2097152, 00:20:35.052 "send_buf_size": 2097152, 00:20:35.052 "enable_recv_pipe": true, 00:20:35.052 "enable_quickack": false, 
00:20:35.052 "enable_placement_id": 0, 00:20:35.052 "enable_zerocopy_send_server": true, 00:20:35.052 "enable_zerocopy_send_client": false, 00:20:35.052 "zerocopy_threshold": 0, 00:20:35.052 "tls_version": 0, 00:20:35.052 "enable_ktls": false 00:20:35.052 } 00:20:35.052 }, 00:20:35.052 { 00:20:35.052 "method": "sock_impl_set_options", 00:20:35.052 "params": { 00:20:35.052 "impl_name": "ssl", 00:20:35.052 "recv_buf_size": 4096, 00:20:35.052 "send_buf_size": 4096, 00:20:35.052 "enable_recv_pipe": true, 00:20:35.053 "enable_quickack": false, 00:20:35.053 "enable_placement_id": 0, 00:20:35.053 "enable_zerocopy_send_server": true, 00:20:35.053 "enable_zerocopy_send_client": false, 00:20:35.053 "zerocopy_threshold": 0, 00:20:35.053 "tls_version": 0, 00:20:35.053 "enable_ktls": false 00:20:35.053 } 00:20:35.053 } 00:20:35.053 ] 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "subsystem": "vmd", 00:20:35.053 "config": [] 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "subsystem": "accel", 00:20:35.053 "config": [ 00:20:35.053 { 00:20:35.053 "method": "accel_set_options", 00:20:35.053 "params": { 00:20:35.053 "small_cache_size": 128, 00:20:35.053 "large_cache_size": 16, 00:20:35.053 "task_count": 2048, 00:20:35.053 "sequence_count": 2048, 00:20:35.053 "buf_count": 2048 00:20:35.053 } 00:20:35.053 } 00:20:35.053 ] 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "subsystem": "bdev", 00:20:35.053 "config": [ 00:20:35.053 { 00:20:35.053 "method": "bdev_set_options", 00:20:35.053 "params": { 00:20:35.053 "bdev_io_pool_size": 65535, 00:20:35.053 "bdev_io_cache_size": 256, 00:20:35.053 "bdev_auto_examine": true, 00:20:35.053 "iobuf_small_cache_size": 128, 00:20:35.053 "iobuf_large_cache_size": 16 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "bdev_raid_set_options", 00:20:35.053 "params": { 00:20:35.053 "process_window_size_kb": 1024 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "bdev_iscsi_set_options", 00:20:35.053 "params": { 00:20:35.053 "timeout_sec": 30 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "bdev_nvme_set_options", 00:20:35.053 "params": { 00:20:35.053 "action_on_timeout": "none", 00:20:35.053 "timeout_us": 0, 00:20:35.053 "timeout_admin_us": 0, 00:20:35.053 "keep_alive_timeout_ms": 10000, 00:20:35.053 "transport_retry_count": 4, 00:20:35.053 "arbitration_burst": 0, 00:20:35.053 "low_priority_weight": 0, 00:20:35.053 "medium_priority_weight": 0, 00:20:35.053 "high_priority_weight": 0, 00:20:35.053 "nvme_adminq_poll_period_us": 10000, 00:20:35.053 "nvme_ioq_poll_period_us": 0, 00:20:35.053 "io_queue_requests": 0, 00:20:35.053 "delay_cmd_submit": true, 00:20:35.053 "bdev_retry_count": 3, 00:20:35.053 "transport_ack_timeout": 0, 00:20:35.053 "ctrlr_loss_timeout_sec": 0, 00:20:35.053 "reconnect_delay_sec": 0, 00:20:35.053 "fast_io_fail_timeout_sec": 0, 00:20:35.053 "generate_uuids": false, 00:20:35.053 "transport_tos": 0, 00:20:35.053 "io_path_stat": false, 00:20:35.053 "allow_accel_sequence": false 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "bdev_nvme_set_hotplug", 00:20:35.053 "params": { 00:20:35.053 "period_us": 100000, 00:20:35.053 "enable": false 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "bdev_malloc_create", 00:20:35.053 "params": { 00:20:35.053 "name": "malloc0", 00:20:35.053 "num_blocks": 8192, 00:20:35.053 "block_size": 4096, 00:20:35.053 "physical_block_size": 4096, 00:20:35.053 "uuid": "b7bb9d86-b4a4-4964-a208-cbb8384ee014", 00:20:35.053 "optimal_io_boundary": 0 00:20:35.053 
} 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "bdev_wait_for_examine" 00:20:35.053 } 00:20:35.053 ] 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "subsystem": "nbd", 00:20:35.053 "config": [] 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "subsystem": "scheduler", 00:20:35.053 "config": [ 00:20:35.053 { 00:20:35.053 "method": "framework_set_scheduler", 00:20:35.053 "params": { 00:20:35.053 "name": "static" 00:20:35.053 } 00:20:35.053 } 00:20:35.053 ] 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "subsystem": "nvmf", 00:20:35.053 "config": [ 00:20:35.053 { 00:20:35.053 "method": "nvmf_set_config", 00:20:35.053 "params": { 00:20:35.053 "discovery_filter": "match_any", 00:20:35.053 "admin_cmd_passthru": { 00:20:35.053 "identify_ctrlr": false 00:20:35.053 } 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_set_max_subsystems", 00:20:35.053 "params": { 00:20:35.053 "max_subsystems": 1024 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_set_crdt", 00:20:35.053 "params": { 00:20:35.053 "crdt1": 0, 00:20:35.053 "crdt2": 0, 00:20:35.053 "crdt3": 0 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_create_transport", 00:20:35.053 "params": { 00:20:35.053 "trtype": "TCP", 00:20:35.053 "max_queue_depth": 128, 00:20:35.053 "max_io_qpairs_per_ctrlr": 127, 00:20:35.053 "in_capsule_data_size": 4096, 00:20:35.053 "max_io_size": 131072, 00:20:35.053 "io_unit_size": 131072, 00:20:35.053 "max_aq_depth": 128, 00:20:35.053 "num_shared_buffers": 511, 00:20:35.053 "buf_cache_size": 4294967295, 00:20:35.053 "dif_insert_or_strip": false, 00:20:35.053 "zcopy": false, 00:20:35.053 "c2h_success": false, 00:20:35.053 "sock_priority": 0, 00:20:35.053 "abort_timeout_sec": 1 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_create_subsystem", 00:20:35.053 "params": { 00:20:35.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.053 "allow_any_host": false, 00:20:35.053 "serial_number": "SPDK00000000000001", 00:20:35.053 "model_number": "SPDK bdev Controller", 00:20:35.053 "max_namespaces": 10, 00:20:35.053 "min_cntlid": 1, 00:20:35.053 "max_cntlid": 65519, 00:20:35.053 "ana_reporting": false 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_subsystem_add_host", 00:20:35.053 "params": { 00:20:35.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.053 "host": "nqn.2016-06.io.spdk:host1", 00:20:35.053 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_subsystem_add_ns", 00:20:35.053 "params": { 00:20:35.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.053 "namespace": { 00:20:35.053 "nsid": 1, 00:20:35.053 "bdev_name": "malloc0", 00:20:35.053 "nguid": "B7BB9D86B4A44964A208CBB8384EE014", 00:20:35.053 "uuid": "b7bb9d86-b4a4-4964-a208-cbb8384ee014" 00:20:35.053 } 00:20:35.053 } 00:20:35.053 }, 00:20:35.053 { 00:20:35.053 "method": "nvmf_subsystem_add_listener", 00:20:35.053 "params": { 00:20:35.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.053 "listen_address": { 00:20:35.053 "trtype": "TCP", 00:20:35.053 "adrfam": "IPv4", 00:20:35.053 "traddr": "10.0.0.2", 00:20:35.053 "trsvcid": "4420" 00:20:35.053 }, 00:20:35.053 "secure_channel": true 00:20:35.053 } 00:20:35.053 } 00:20:35.053 ] 00:20:35.053 } 00:20:35.053 ] 00:20:35.053 }' 00:20:35.053 01:00:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:35.053 01:00:19 -- common/autotest_common.sh@10 -- # set +x 00:20:35.053 01:00:19 -- 
nvmf/common.sh@469 -- # nvmfpid=3431273 00:20:35.053 01:00:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:35.053 01:00:19 -- nvmf/common.sh@470 -- # waitforlisten 3431273 00:20:35.053 01:00:19 -- common/autotest_common.sh@819 -- # '[' -z 3431273 ']' 00:20:35.053 01:00:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.053 01:00:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.053 01:00:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.053 01:00:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.053 01:00:19 -- common/autotest_common.sh@10 -- # set +x 00:20:35.314 [2024-07-23 01:00:19.266504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:35.314 [2024-07-23 01:00:19.266585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.314 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.314 [2024-07-23 01:00:19.332072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.314 [2024-07-23 01:00:19.420896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.314 [2024-07-23 01:00:19.421087] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.314 [2024-07-23 01:00:19.421116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.314 [2024-07-23 01:00:19.421140] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.314 [2024-07-23 01:00:19.421180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.574 [2024-07-23 01:00:19.651973] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.574 [2024-07-23 01:00:19.683980] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.574 [2024-07-23 01:00:19.684234] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.144 01:00:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.144 01:00:20 -- common/autotest_common.sh@852 -- # return 0 00:20:36.144 01:00:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.144 01:00:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:36.144 01:00:20 -- common/autotest_common.sh@10 -- # set +x 00:20:36.144 01:00:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.144 01:00:20 -- target/tls.sh@216 -- # bdevperf_pid=3431379 00:20:36.144 01:00:20 -- target/tls.sh@217 -- # waitforlisten 3431379 /var/tmp/bdevperf.sock 00:20:36.144 01:00:20 -- common/autotest_common.sh@819 -- # '[' -z 3431379 ']' 00:20:36.144 01:00:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.144 01:00:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.144 01:00:20 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:36.144 01:00:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
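What target/tls.sh@212-217 is doing here is the configuration round-trip: the JSON captured earlier with save_config is fed verbatim to a fresh nvmf_tgt and a fresh bdevperf via -c, so the TLS subsystem, the secure-channel listener and the PSK wiring come back without replaying the individual RPCs. A minimal sketch of the same round-trip follows; the trace streams the JSON through /dev/fd/62 and /dev/fd/63, whereas the file names and relative binary paths below are illustrative.

  RPC=./scripts/rpc.py                               # shortened; the trace uses the full workspace path
  # Capture the live configuration of the target and of bdevperf.
  $RPC save_config                           > tgt.json
  $RPC -s /var/tmp/bdevperf.sock save_config > bdevperf.json
  # Restart both processes from the saved configuration instead of re-issuing RPCs.
  ./build/bin/nvmf_tgt -m 0x2 -c tgt.json &
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &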
00:20:36.144 01:00:20 -- target/tls.sh@213 -- # echo '{ 00:20:36.144 "subsystems": [ 00:20:36.144 { 00:20:36.144 "subsystem": "iobuf", 00:20:36.144 "config": [ 00:20:36.144 { 00:20:36.144 "method": "iobuf_set_options", 00:20:36.144 "params": { 00:20:36.144 "small_pool_count": 8192, 00:20:36.144 "large_pool_count": 1024, 00:20:36.144 "small_bufsize": 8192, 00:20:36.144 "large_bufsize": 135168 00:20:36.144 } 00:20:36.144 } 00:20:36.144 ] 00:20:36.144 }, 00:20:36.144 { 00:20:36.144 "subsystem": "sock", 00:20:36.144 "config": [ 00:20:36.144 { 00:20:36.144 "method": "sock_impl_set_options", 00:20:36.144 "params": { 00:20:36.144 "impl_name": "posix", 00:20:36.144 "recv_buf_size": 2097152, 00:20:36.144 "send_buf_size": 2097152, 00:20:36.144 "enable_recv_pipe": true, 00:20:36.144 "enable_quickack": false, 00:20:36.144 "enable_placement_id": 0, 00:20:36.144 "enable_zerocopy_send_server": true, 00:20:36.144 "enable_zerocopy_send_client": false, 00:20:36.144 "zerocopy_threshold": 0, 00:20:36.144 "tls_version": 0, 00:20:36.144 "enable_ktls": false 00:20:36.144 } 00:20:36.144 }, 00:20:36.144 { 00:20:36.144 "method": "sock_impl_set_options", 00:20:36.144 "params": { 00:20:36.144 "impl_name": "ssl", 00:20:36.144 "recv_buf_size": 4096, 00:20:36.144 "send_buf_size": 4096, 00:20:36.144 "enable_recv_pipe": true, 00:20:36.144 "enable_quickack": false, 00:20:36.144 "enable_placement_id": 0, 00:20:36.144 "enable_zerocopy_send_server": true, 00:20:36.144 "enable_zerocopy_send_client": false, 00:20:36.144 "zerocopy_threshold": 0, 00:20:36.144 "tls_version": 0, 00:20:36.144 "enable_ktls": false 00:20:36.144 } 00:20:36.144 } 00:20:36.144 ] 00:20:36.144 }, 00:20:36.144 { 00:20:36.144 "subsystem": "vmd", 00:20:36.144 "config": [] 00:20:36.144 }, 00:20:36.144 { 00:20:36.144 "subsystem": "accel", 00:20:36.144 "config": [ 00:20:36.144 { 00:20:36.144 "method": "accel_set_options", 00:20:36.144 "params": { 00:20:36.144 "small_cache_size": 128, 00:20:36.144 "large_cache_size": 16, 00:20:36.144 "task_count": 2048, 00:20:36.144 "sequence_count": 2048, 00:20:36.144 "buf_count": 2048 00:20:36.144 } 00:20:36.144 } 00:20:36.144 ] 00:20:36.144 }, 00:20:36.144 { 00:20:36.144 "subsystem": "bdev", 00:20:36.144 "config": [ 00:20:36.144 { 00:20:36.144 "method": "bdev_set_options", 00:20:36.144 "params": { 00:20:36.144 "bdev_io_pool_size": 65535, 00:20:36.144 "bdev_io_cache_size": 256, 00:20:36.144 "bdev_auto_examine": true, 00:20:36.144 "iobuf_small_cache_size": 128, 00:20:36.144 "iobuf_large_cache_size": 16 00:20:36.144 } 00:20:36.144 }, 00:20:36.144 { 00:20:36.144 "method": "bdev_raid_set_options", 00:20:36.144 "params": { 00:20:36.144 "process_window_size_kb": 1024 00:20:36.144 } 00:20:36.145 }, 00:20:36.145 { 00:20:36.145 "method": "bdev_iscsi_set_options", 00:20:36.145 "params": { 00:20:36.145 "timeout_sec": 30 00:20:36.145 } 00:20:36.145 }, 00:20:36.145 { 00:20:36.145 "method": "bdev_nvme_set_options", 00:20:36.145 "params": { 00:20:36.145 "action_on_timeout": "none", 00:20:36.145 "timeout_us": 0, 00:20:36.145 "timeout_admin_us": 0, 00:20:36.145 "keep_alive_timeout_ms": 10000, 00:20:36.145 "transport_retry_count": 4, 00:20:36.145 "arbitration_burst": 0, 00:20:36.145 "low_priority_weight": 0, 00:20:36.145 "medium_priority_weight": 0, 00:20:36.145 "high_priority_weight": 0, 00:20:36.145 "nvme_adminq_poll_period_us": 10000, 00:20:36.145 "nvme_ioq_poll_period_us": 0, 00:20:36.145 "io_queue_requests": 512, 00:20:36.145 "delay_cmd_submit": true, 00:20:36.145 "bdev_retry_count": 3, 00:20:36.145 "transport_ack_timeout": 0, 00:20:36.145 
"ctrlr_loss_timeout_sec": 0, 00:20:36.145 "reconnect_delay_sec": 0, 00:20:36.145 "fast_io_fail_timeout_sec": 0, 00:20:36.145 "generate_uuids": false, 00:20:36.145 "transport_tos": 0, 00:20:36.145 "io_path_stat": false, 00:20:36.145 "allow_accel_sequence": false 00:20:36.145 } 00:20:36.145 }, 00:20:36.145 { 00:20:36.145 "method": "bdev_nvme_attach_controller", 00:20:36.145 "params": { 00:20:36.145 "name": "TLSTEST", 00:20:36.145 "trtype": "TCP", 00:20:36.145 "adrfam": "IPv4", 00:20:36.145 "traddr": "10.0.0.2", 00:20:36.145 "trsvcid": "4420", 00:20:36.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.145 "prchk_reftag": false, 00:20:36.145 "prchk_guard": false, 00:20:36.145 "ctrlr_loss_timeout_sec": 0, 00:20:36.145 "reconnect_delay_sec": 0, 00:20:36.145 "fast_io_fail_timeout_sec": 0, 00:20:36.145 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:36.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.145 "hdgst": false, 00:20:36.145 "ddgst": false 00:20:36.145 } 00:20:36.145 }, 00:20:36.145 { 00:20:36.145 "method": "bdev_nvme_set_hotplug", 00:20:36.145 "params": { 00:20:36.145 "period_us": 100000, 00:20:36.145 "enable": false 00:20:36.145 } 00:20:36.145 }, 00:20:36.145 { 00:20:36.145 "method": "bdev_wait_for_examine" 00:20:36.145 } 00:20:36.145 ] 00:20:36.145 }, 00:20:36.145 { 00:20:36.145 "subsystem": "nbd", 00:20:36.145 "config": [] 00:20:36.145 } 00:20:36.145 ] 00:20:36.145 }' 00:20:36.145 01:00:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.145 01:00:20 -- common/autotest_common.sh@10 -- # set +x 00:20:36.145 [2024-07-23 01:00:20.296656] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:36.145 [2024-07-23 01:00:20.296730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431379 ] 00:20:36.145 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.404 [2024-07-23 01:00:20.353181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.404 [2024-07-23 01:00:20.434165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.404 [2024-07-23 01:00:20.589183] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.341 01:00:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.341 01:00:21 -- common/autotest_common.sh@852 -- # return 0 00:20:37.341 01:00:21 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:37.341 Running I/O for 10 seconds... 
00:20:47.360 00:20:47.360 Latency(us) 00:20:47.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.360 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:47.361 Verification LBA range: start 0x0 length 0x2000 00:20:47.361 TLSTESTn1 : 10.03 1852.93 7.24 0.00 0.00 68969.04 11165.39 80002.47 00:20:47.361 =================================================================================================================== 00:20:47.361 Total : 1852.93 7.24 0.00 0.00 68969.04 11165.39 80002.47 00:20:47.361 0 00:20:47.361 01:00:31 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.361 01:00:31 -- target/tls.sh@223 -- # killprocess 3431379 00:20:47.361 01:00:31 -- common/autotest_common.sh@926 -- # '[' -z 3431379 ']' 00:20:47.361 01:00:31 -- common/autotest_common.sh@930 -- # kill -0 3431379 00:20:47.361 01:00:31 -- common/autotest_common.sh@931 -- # uname 00:20:47.361 01:00:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:47.361 01:00:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3431379 00:20:47.361 01:00:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:47.361 01:00:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:47.361 01:00:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3431379' 00:20:47.361 killing process with pid 3431379 00:20:47.361 01:00:31 -- common/autotest_common.sh@945 -- # kill 3431379 00:20:47.361 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.361 00:20:47.361 Latency(us) 00:20:47.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.361 =================================================================================================================== 00:20:47.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.361 01:00:31 -- common/autotest_common.sh@950 -- # wait 3431379 00:20:47.618 01:00:31 -- target/tls.sh@224 -- # killprocess 3431273 00:20:47.618 01:00:31 -- common/autotest_common.sh@926 -- # '[' -z 3431273 ']' 00:20:47.618 01:00:31 -- common/autotest_common.sh@930 -- # kill -0 3431273 00:20:47.618 01:00:31 -- common/autotest_common.sh@931 -- # uname 00:20:47.618 01:00:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:47.618 01:00:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3431273 00:20:47.618 01:00:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:47.618 01:00:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:47.618 01:00:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3431273' 00:20:47.618 killing process with pid 3431273 00:20:47.618 01:00:31 -- common/autotest_common.sh@945 -- # kill 3431273 00:20:47.618 01:00:31 -- common/autotest_common.sh@950 -- # wait 3431273 00:20:47.876 01:00:31 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:47.877 01:00:31 -- target/tls.sh@227 -- # cleanup 00:20:47.877 01:00:31 -- target/tls.sh@15 -- # process_shm --id 0 00:20:47.877 01:00:31 -- common/autotest_common.sh@796 -- # type=--id 00:20:47.877 01:00:31 -- common/autotest_common.sh@797 -- # id=0 00:20:47.877 01:00:31 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:47.877 01:00:31 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:47.877 01:00:31 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:47.877 01:00:31 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:20:47.877 01:00:31 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:47.877 01:00:31 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:47.877 nvmf_trace.0 00:20:47.877 01:00:31 -- common/autotest_common.sh@811 -- # return 0 00:20:47.877 01:00:31 -- target/tls.sh@16 -- # killprocess 3431379 00:20:47.877 01:00:31 -- common/autotest_common.sh@926 -- # '[' -z 3431379 ']' 00:20:47.877 01:00:31 -- common/autotest_common.sh@930 -- # kill -0 3431379 00:20:47.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3431379) - No such process 00:20:47.877 01:00:31 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3431379 is not found' 00:20:47.877 Process with pid 3431379 is not found 00:20:47.877 01:00:31 -- target/tls.sh@17 -- # nvmftestfini 00:20:47.877 01:00:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:47.877 01:00:31 -- nvmf/common.sh@116 -- # sync 00:20:47.877 01:00:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:47.877 01:00:31 -- nvmf/common.sh@119 -- # set +e 00:20:47.877 01:00:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:47.877 01:00:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:47.877 rmmod nvme_tcp 00:20:47.877 rmmod nvme_fabrics 00:20:47.877 rmmod nvme_keyring 00:20:47.877 01:00:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:47.877 01:00:32 -- nvmf/common.sh@123 -- # set -e 00:20:47.877 01:00:32 -- nvmf/common.sh@124 -- # return 0 00:20:47.877 01:00:32 -- nvmf/common.sh@477 -- # '[' -n 3431273 ']' 00:20:47.877 01:00:32 -- nvmf/common.sh@478 -- # killprocess 3431273 00:20:47.877 01:00:32 -- common/autotest_common.sh@926 -- # '[' -z 3431273 ']' 00:20:47.877 01:00:32 -- common/autotest_common.sh@930 -- # kill -0 3431273 00:20:47.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3431273) - No such process 00:20:47.877 01:00:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3431273 is not found' 00:20:47.877 Process with pid 3431273 is not found 00:20:47.877 01:00:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:47.877 01:00:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:47.877 01:00:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:47.877 01:00:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:47.877 01:00:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:47.877 01:00:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.877 01:00:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.877 01:00:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.415 01:00:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:50.415 01:00:34 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:50.415 00:20:50.415 real 1m14.302s 00:20:50.415 user 1m54.196s 00:20:50.415 sys 0m27.695s 00:20:50.415 01:00:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.415 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:20:50.415 ************************************ 00:20:50.415 END TEST nvmf_tls 00:20:50.415 ************************************ 
00:20:50.415 01:00:34 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:50.415 01:00:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:50.415 01:00:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:50.415 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:20:50.415 ************************************ 00:20:50.415 START TEST nvmf_fips 00:20:50.415 ************************************ 00:20:50.415 01:00:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:50.415 * Looking for test storage... 00:20:50.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:50.415 01:00:34 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.415 01:00:34 -- nvmf/common.sh@7 -- # uname -s 00:20:50.415 01:00:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.415 01:00:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.415 01:00:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.415 01:00:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.415 01:00:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.415 01:00:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.415 01:00:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.415 01:00:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.415 01:00:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.415 01:00:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.415 01:00:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.415 01:00:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.415 01:00:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.415 01:00:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.415 01:00:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.415 01:00:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.415 01:00:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.415 01:00:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.415 01:00:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.415 01:00:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.415 01:00:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.415 01:00:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.415 01:00:34 -- paths/export.sh@5 -- # export PATH 00:20:50.415 01:00:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.415 01:00:34 -- nvmf/common.sh@46 -- # : 0 00:20:50.415 01:00:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:50.415 01:00:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:50.415 01:00:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:50.415 01:00:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.415 01:00:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.415 01:00:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:50.415 01:00:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:50.415 01:00:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:50.415 01:00:34 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:50.415 01:00:34 -- fips/fips.sh@89 -- # check_openssl_version 00:20:50.415 01:00:34 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:50.415 01:00:34 -- fips/fips.sh@85 -- # openssl version 00:20:50.415 01:00:34 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:50.415 01:00:34 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:50.415 01:00:34 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:50.415 01:00:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:50.415 01:00:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:50.415 01:00:34 -- scripts/common.sh@335 -- # IFS=.-: 00:20:50.415 01:00:34 -- scripts/common.sh@335 -- # read -ra ver1 00:20:50.415 01:00:34 -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.415 01:00:34 -- scripts/common.sh@336 -- # read -ra ver2 00:20:50.415 01:00:34 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:50.415 01:00:34 -- scripts/common.sh@339 -- # ver1_l=3 00:20:50.415 01:00:34 -- scripts/common.sh@340 -- # ver2_l=3 00:20:50.415 01:00:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:20:50.415 01:00:34 -- scripts/common.sh@343 -- # case "$op" in 00:20:50.415 01:00:34 -- scripts/common.sh@347 -- # : 1 00:20:50.415 01:00:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:50.415 01:00:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.415 01:00:34 -- scripts/common.sh@364 -- # decimal 3 00:20:50.415 01:00:34 -- scripts/common.sh@352 -- # local d=3 00:20:50.415 01:00:34 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:50.415 01:00:34 -- scripts/common.sh@354 -- # echo 3 00:20:50.415 01:00:34 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:50.415 01:00:34 -- scripts/common.sh@365 -- # decimal 3 00:20:50.415 01:00:34 -- scripts/common.sh@352 -- # local d=3 00:20:50.415 01:00:34 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:50.415 01:00:34 -- scripts/common.sh@354 -- # echo 3 00:20:50.415 01:00:34 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:50.415 01:00:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.415 01:00:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:50.415 01:00:34 -- scripts/common.sh@363 -- # (( v++ )) 00:20:50.415 01:00:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.415 01:00:34 -- scripts/common.sh@364 -- # decimal 0 00:20:50.415 01:00:34 -- scripts/common.sh@352 -- # local d=0 00:20:50.415 01:00:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.415 01:00:34 -- scripts/common.sh@354 -- # echo 0 00:20:50.415 01:00:34 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:50.415 01:00:34 -- scripts/common.sh@365 -- # decimal 0 00:20:50.415 01:00:34 -- scripts/common.sh@352 -- # local d=0 00:20:50.416 01:00:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.416 01:00:34 -- scripts/common.sh@354 -- # echo 0 00:20:50.416 01:00:34 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:50.416 01:00:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.416 01:00:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:50.416 01:00:34 -- scripts/common.sh@363 -- # (( v++ )) 00:20:50.416 01:00:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.416 01:00:34 -- scripts/common.sh@364 -- # decimal 9 00:20:50.416 01:00:34 -- scripts/common.sh@352 -- # local d=9 00:20:50.416 01:00:34 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:50.416 01:00:34 -- scripts/common.sh@354 -- # echo 9 00:20:50.416 01:00:34 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:50.416 01:00:34 -- scripts/common.sh@365 -- # decimal 0 00:20:50.416 01:00:34 -- scripts/common.sh@352 -- # local d=0 00:20:50.416 01:00:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.416 01:00:34 -- scripts/common.sh@354 -- # echo 0 00:20:50.416 01:00:34 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:50.416 01:00:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.416 01:00:34 -- scripts/common.sh@366 -- # return 0 00:20:50.416 01:00:34 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:50.416 01:00:34 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:50.416 01:00:34 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:50.416 01:00:34 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:50.416 01:00:34 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:50.416 01:00:34 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:50.416 01:00:34 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:50.416 01:00:34 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:50.416 01:00:34 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:50.416 01:00:34 -- fips/fips.sh@114 -- # build_openssl_config 00:20:50.416 01:00:34 -- fips/fips.sh@37 -- # cat 00:20:50.416 01:00:34 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:50.416 01:00:34 -- fips/fips.sh@58 -- # cat - 00:20:50.416 01:00:34 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:50.416 01:00:34 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:50.416 01:00:34 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:50.416 01:00:34 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:50.416 01:00:34 -- fips/fips.sh@117 -- # openssl list -providers 00:20:50.416 01:00:34 -- fips/fips.sh@117 -- # grep name 00:20:50.416 01:00:34 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:50.416 01:00:34 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:50.416 01:00:34 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:50.416 01:00:34 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:50.416 01:00:34 -- fips/fips.sh@128 -- # : 00:20:50.416 01:00:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:50.416 01:00:34 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:50.416 01:00:34 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:50.416 01:00:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:50.416 01:00:34 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:50.416 01:00:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:50.416 01:00:34 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:50.416 01:00:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:50.416 01:00:34 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:50.416 01:00:34 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:50.416 01:00:34 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:50.416 Error setting digest 00:20:50.416 0002A2D8BB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:50.416 0002A2D8BB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:50.416 01:00:34 -- common/autotest_common.sh@643 -- # es=1 00:20:50.416 01:00:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:50.416 01:00:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:50.416 01:00:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
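Stripped of the xtrace noise, the gate above amounts to three checks before the FIPS test may proceed: OpenSSL must be at least 3.0.0, the provider list must contain both a base and a fips provider, and a non-approved digest such as md5 must be refused. A condensed sketch of those checks; the grep pattern and the pass/fail handling are simplifications, not a verbatim copy of fips.sh:

openssl version                                          # reports 3.0.9 in this run
OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep -i 'fips provider'
if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null >/dev/null 2>&1; then
    echo 'md5 unexpectedly succeeded: FIPS mode is not active'
else
    echo 'md5 rejected as expected under FIPS'
fi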
00:20:50.416 01:00:34 -- fips/fips.sh@131 -- # nvmftestinit 00:20:50.416 01:00:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:50.416 01:00:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.416 01:00:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:50.416 01:00:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:50.416 01:00:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:50.416 01:00:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.416 01:00:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.416 01:00:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.416 01:00:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:50.416 01:00:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:50.416 01:00:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:50.416 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:20:52.316 01:00:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:52.316 01:00:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:52.316 01:00:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:52.316 01:00:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:52.316 01:00:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:52.316 01:00:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:52.316 01:00:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:52.316 01:00:36 -- nvmf/common.sh@294 -- # net_devs=() 00:20:52.316 01:00:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:52.316 01:00:36 -- nvmf/common.sh@295 -- # e810=() 00:20:52.316 01:00:36 -- nvmf/common.sh@295 -- # local -ga e810 00:20:52.316 01:00:36 -- nvmf/common.sh@296 -- # x722=() 00:20:52.316 01:00:36 -- nvmf/common.sh@296 -- # local -ga x722 00:20:52.316 01:00:36 -- nvmf/common.sh@297 -- # mlx=() 00:20:52.316 01:00:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:52.316 01:00:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.316 01:00:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:52.316 01:00:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:52.316 01:00:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:52.316 01:00:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.316 01:00:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:52.316 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:20:52.316 01:00:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.316 01:00:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.316 01:00:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:52.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:52.317 01:00:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:52.317 01:00:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.317 01:00:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.317 01:00:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.317 01:00:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.317 01:00:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:52.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:52.317 01:00:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.317 01:00:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.317 01:00:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.317 01:00:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.317 01:00:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.317 01:00:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:52.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:52.317 01:00:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.317 01:00:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:52.317 01:00:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:52.317 01:00:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:52.317 01:00:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.317 01:00:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.317 01:00:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.317 01:00:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:52.317 01:00:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.317 01:00:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.317 01:00:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:52.317 01:00:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.317 01:00:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.317 01:00:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:52.317 01:00:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:52.317 01:00:36 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:20:52.317 01:00:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.317 01:00:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.317 01:00:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.317 01:00:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:52.317 01:00:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.317 01:00:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.317 01:00:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.317 01:00:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:52.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:20:52.317 00:20:52.317 --- 10.0.0.2 ping statistics --- 00:20:52.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.317 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:20:52.317 01:00:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:20:52.317 00:20:52.317 --- 10.0.0.1 ping statistics --- 00:20:52.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.317 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:20:52.317 01:00:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.317 01:00:36 -- nvmf/common.sh@410 -- # return 0 00:20:52.317 01:00:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:52.317 01:00:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.317 01:00:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:52.317 01:00:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.317 01:00:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:52.317 01:00:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:52.317 01:00:36 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:52.317 01:00:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:52.317 01:00:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:52.317 01:00:36 -- common/autotest_common.sh@10 -- # set +x 00:20:52.317 01:00:36 -- nvmf/common.sh@469 -- # nvmfpid=3434767 00:20:52.317 01:00:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.317 01:00:36 -- nvmf/common.sh@470 -- # waitforlisten 3434767 00:20:52.317 01:00:36 -- common/autotest_common.sh@819 -- # '[' -z 3434767 ']' 00:20:52.317 01:00:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.317 01:00:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.317 01:00:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.317 01:00:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.317 01:00:36 -- common/autotest_common.sh@10 -- # set +x 00:20:52.317 [2024-07-23 01:00:36.482105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
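Condensed, the nvmf_tcp_init sequence traced above builds a two-port topology: the target-side port is moved into its own network namespace and addressed as 10.0.0.2, while the initiator-side port stays in the root namespace as 10.0.0.1, with an iptables rule opening the NVMe/TCP port between them. Interface names and addresses below are the ones printed in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator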
00:20:52.317 [2024-07-23 01:00:36.482208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.317 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.577 [2024-07-23 01:00:36.546439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.577 [2024-07-23 01:00:36.633840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:52.577 [2024-07-23 01:00:36.634006] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.577 [2024-07-23 01:00:36.634023] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.577 [2024-07-23 01:00:36.634035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.577 [2024-07-23 01:00:36.634071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.511 01:00:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:53.511 01:00:37 -- common/autotest_common.sh@852 -- # return 0 00:20:53.511 01:00:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:53.511 01:00:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:53.511 01:00:37 -- common/autotest_common.sh@10 -- # set +x 00:20:53.511 01:00:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.511 01:00:37 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:53.511 01:00:37 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.511 01:00:37 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.511 01:00:37 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.511 01:00:37 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.511 01:00:37 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.511 01:00:37 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.511 01:00:37 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.511 [2024-07-23 01:00:37.638357] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.511 [2024-07-23 01:00:37.654370] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.511 [2024-07-23 01:00:37.654596] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.511 malloc0 00:20:53.511 01:00:37 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.511 01:00:37 -- fips/fips.sh@148 -- # bdevperf_pid=3434925 00:20:53.512 01:00:37 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.512 01:00:37 -- fips/fips.sh@149 -- # waitforlisten 3434925 /var/tmp/bdevperf.sock 00:20:53.512 01:00:37 -- common/autotest_common.sh@819 -- # '[' -z 3434925 ']' 00:20:53.512 01:00:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.512 01:00:37 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.512 01:00:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.512 01:00:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.512 01:00:37 -- common/autotest_common.sh@10 -- # set +x 00:20:53.771 [2024-07-23 01:00:37.770905] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:53.771 [2024-07-23 01:00:37.770989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434925 ] 00:20:53.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.771 [2024-07-23 01:00:37.827399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.771 [2024-07-23 01:00:37.910314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.706 01:00:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.706 01:00:38 -- common/autotest_common.sh@852 -- # return 0 00:20:54.706 01:00:38 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:54.965 [2024-07-23 01:00:38.981993] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.965 TLSTESTn1 00:20:54.965 01:00:39 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:55.231 Running I/O for 10 seconds... 
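The PSK handed to --psk in the attach call above is nothing more than a restricted-permission text file containing the interchange-format key echoed earlier in this trace. A condensed sketch of that preparation, using the key string and path printed by this run:

KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
KEY_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
echo -n "$KEY" > "$KEY_PATH"          # no trailing newline, as in the trace
chmod 0600 "$KEY_PATH"                # restrict permissions, as fips.sh does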
00:21:05.208 00:21:05.208 Latency(us) 00:21:05.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.208 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.208 Verification LBA range: start 0x0 length 0x2000 00:21:05.208 TLSTESTn1 : 10.03 2375.83 9.28 0.00 0.00 53796.52 10631.40 56700.78 00:21:05.208 =================================================================================================================== 00:21:05.208 Total : 2375.83 9.28 0.00 0.00 53796.52 10631.40 56700.78 00:21:05.208 0 00:21:05.208 01:00:49 -- fips/fips.sh@1 -- # cleanup 00:21:05.208 01:00:49 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:05.208 01:00:49 -- common/autotest_common.sh@796 -- # type=--id 00:21:05.208 01:00:49 -- common/autotest_common.sh@797 -- # id=0 00:21:05.208 01:00:49 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:05.208 01:00:49 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:05.208 01:00:49 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:05.208 01:00:49 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:05.208 01:00:49 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:05.208 01:00:49 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:05.208 nvmf_trace.0 00:21:05.208 01:00:49 -- common/autotest_common.sh@811 -- # return 0 00:21:05.208 01:00:49 -- fips/fips.sh@16 -- # killprocess 3434925 00:21:05.208 01:00:49 -- common/autotest_common.sh@926 -- # '[' -z 3434925 ']' 00:21:05.208 01:00:49 -- common/autotest_common.sh@930 -- # kill -0 3434925 00:21:05.208 01:00:49 -- common/autotest_common.sh@931 -- # uname 00:21:05.208 01:00:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:05.208 01:00:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3434925 00:21:05.208 01:00:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:05.208 01:00:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:05.208 01:00:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3434925' 00:21:05.208 killing process with pid 3434925 00:21:05.208 01:00:49 -- common/autotest_common.sh@945 -- # kill 3434925 00:21:05.208 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.208 00:21:05.208 Latency(us) 00:21:05.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.208 =================================================================================================================== 00:21:05.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.208 01:00:49 -- common/autotest_common.sh@950 -- # wait 3434925 00:21:05.467 01:00:49 -- fips/fips.sh@17 -- # nvmftestfini 00:21:05.467 01:00:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:05.467 01:00:49 -- nvmf/common.sh@116 -- # sync 00:21:05.467 01:00:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:05.467 01:00:49 -- nvmf/common.sh@119 -- # set +e 00:21:05.467 01:00:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:05.467 01:00:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:05.467 rmmod nvme_tcp 00:21:05.467 rmmod nvme_fabrics 00:21:05.468 rmmod nvme_keyring 00:21:05.468 01:00:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:05.468 01:00:49 -- nvmf/common.sh@123 -- # set -e 00:21:05.468 01:00:49 -- nvmf/common.sh@124 -- # return 0 
00:21:05.468 01:00:49 -- nvmf/common.sh@477 -- # '[' -n 3434767 ']' 00:21:05.468 01:00:49 -- nvmf/common.sh@478 -- # killprocess 3434767 00:21:05.468 01:00:49 -- common/autotest_common.sh@926 -- # '[' -z 3434767 ']' 00:21:05.468 01:00:49 -- common/autotest_common.sh@930 -- # kill -0 3434767 00:21:05.468 01:00:49 -- common/autotest_common.sh@931 -- # uname 00:21:05.468 01:00:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:05.468 01:00:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3434767 00:21:05.468 01:00:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:05.468 01:00:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:05.468 01:00:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3434767' 00:21:05.468 killing process with pid 3434767 00:21:05.468 01:00:49 -- common/autotest_common.sh@945 -- # kill 3434767 00:21:05.468 01:00:49 -- common/autotest_common.sh@950 -- # wait 3434767 00:21:05.726 01:00:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:05.726 01:00:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:05.726 01:00:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:05.726 01:00:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.726 01:00:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:05.726 01:00:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.726 01:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.726 01:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.262 01:00:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:08.262 01:00:51 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:08.262 00:21:08.262 real 0m17.813s 00:21:08.262 user 0m19.172s 00:21:08.262 sys 0m8.264s 00:21:08.262 01:00:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.262 01:00:51 -- common/autotest_common.sh@10 -- # set +x 00:21:08.262 ************************************ 00:21:08.262 END TEST nvmf_fips 00:21:08.262 ************************************ 00:21:08.262 01:00:51 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:08.262 01:00:51 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:08.262 01:00:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:08.262 01:00:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:08.262 01:00:51 -- common/autotest_common.sh@10 -- # set +x 00:21:08.262 ************************************ 00:21:08.262 START TEST nvmf_fuzz 00:21:08.262 ************************************ 00:21:08.262 01:00:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:08.262 * Looking for test storage... 
00:21:08.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.262 01:00:51 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.262 01:00:51 -- nvmf/common.sh@7 -- # uname -s 00:21:08.262 01:00:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.262 01:00:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.262 01:00:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.262 01:00:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.262 01:00:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.262 01:00:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.262 01:00:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.262 01:00:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.262 01:00:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.262 01:00:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.262 01:00:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.262 01:00:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.262 01:00:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.262 01:00:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.262 01:00:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.262 01:00:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.262 01:00:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.262 01:00:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.262 01:00:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.262 01:00:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.262 01:00:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.262 01:00:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.262 01:00:51 -- paths/export.sh@5 -- # export PATH 00:21:08.262 01:00:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.262 01:00:51 -- nvmf/common.sh@46 -- # : 0 00:21:08.262 01:00:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:08.262 01:00:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:08.262 01:00:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:08.262 01:00:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.262 01:00:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.262 01:00:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:08.262 01:00:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:08.262 01:00:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:08.262 01:00:51 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:08.262 01:00:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:08.262 01:00:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.262 01:00:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:08.262 01:00:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:08.262 01:00:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:08.262 01:00:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.262 01:00:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.262 01:00:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.262 01:00:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:08.262 01:00:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:08.262 01:00:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:08.262 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:21:10.230 01:00:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:10.230 01:00:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:10.230 01:00:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:10.230 01:00:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:10.230 01:00:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:10.230 01:00:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:10.230 01:00:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:10.230 01:00:53 -- nvmf/common.sh@294 -- # net_devs=() 00:21:10.230 01:00:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:10.230 01:00:53 -- nvmf/common.sh@295 -- # e810=() 00:21:10.230 01:00:53 -- nvmf/common.sh@295 -- # local -ga e810 00:21:10.230 01:00:53 -- nvmf/common.sh@296 -- # x722=() 
00:21:10.230 01:00:53 -- nvmf/common.sh@296 -- # local -ga x722 00:21:10.230 01:00:53 -- nvmf/common.sh@297 -- # mlx=() 00:21:10.230 01:00:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:10.230 01:00:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.230 01:00:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:10.230 01:00:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:10.230 01:00:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:10.230 01:00:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:10.230 01:00:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:10.230 01:00:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:10.230 01:00:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:10.230 01:00:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:10.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:10.230 01:00:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:10.230 01:00:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:10.231 01:00:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:10.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:10.231 01:00:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:10.231 01:00:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:10.231 01:00:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.231 01:00:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:10.231 01:00:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.231 01:00:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:10.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:10.231 01:00:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
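The scan traced here ultimately reduces to a sysfs glob per whitelisted PCI function; with the two addresses this run discovered, that is:

ls /sys/bus/pci/devices/0000:0a:00.0/net/      # -> cvl_0_0
ls /sys/bus/pci/devices/0000:0a:00.1/net/      # -> cvl_0_1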
00:21:10.231 01:00:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:10.231 01:00:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.231 01:00:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:10.231 01:00:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.231 01:00:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:10.231 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:10.231 01:00:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.231 01:00:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:10.231 01:00:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:10.231 01:00:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:10.231 01:00:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:10.231 01:00:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.231 01:00:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.231 01:00:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.231 01:00:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:10.231 01:00:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.231 01:00:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.231 01:00:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:10.231 01:00:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.231 01:00:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.231 01:00:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:10.231 01:00:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:10.231 01:00:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.231 01:00:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.231 01:00:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.231 01:00:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.231 01:00:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:10.231 01:00:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.231 01:00:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.231 01:00:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.231 01:00:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:10.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:21:10.231 00:21:10.231 --- 10.0.0.2 ping statistics --- 00:21:10.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.231 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:21:10.231 01:00:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:21:10.231 00:21:10.231 --- 10.0.0.1 ping statistics --- 00:21:10.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.231 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:21:10.231 01:00:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.231 01:00:54 -- nvmf/common.sh@410 -- # return 0 00:21:10.231 01:00:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:10.231 01:00:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.231 01:00:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:10.231 01:00:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:10.231 01:00:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.231 01:00:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:10.231 01:00:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:10.231 01:00:54 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3438247 00:21:10.231 01:00:54 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:10.231 01:00:54 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:10.231 01:00:54 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3438247 00:21:10.231 01:00:54 -- common/autotest_common.sh@819 -- # '[' -z 3438247 ']' 00:21:10.231 01:00:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.231 01:00:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:10.231 01:00:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
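The fabrics fuzz target has just been started inside the network namespace that the preceding trace wired up: the first E810 port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, and both directions are verified with ping. Because the two ports are functions of the same NIC, the namespace keeps target and initiator in separate network stacks so NVMe/TCP traffic actually crosses the link instead of being short-circuited locally. A standalone recap of that wiring, using the interface names and addresses from this run (root required):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP (port 4420) through the host firewall
  ping -c 1 10.0.0.2                                           # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target namespace -> root namespace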
00:21:10.231 01:00:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:10.231 01:00:54 -- common/autotest_common.sh@10 -- # set +x 00:21:11.167 01:00:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:11.167 01:00:55 -- common/autotest_common.sh@852 -- # return 0 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.167 01:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.167 01:00:55 -- common/autotest_common.sh@10 -- # set +x 00:21:11.167 01:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:11.167 01:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.167 01:00:55 -- common/autotest_common.sh@10 -- # set +x 00:21:11.167 Malloc0 00:21:11.167 01:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.167 01:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.167 01:00:55 -- common/autotest_common.sh@10 -- # set +x 00:21:11.167 01:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.167 01:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.167 01:00:55 -- common/autotest_common.sh@10 -- # set +x 00:21:11.167 01:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.167 01:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.167 01:00:55 -- common/autotest_common.sh@10 -- # set +x 00:21:11.167 01:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:11.167 01:00:55 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:43.248 Fuzzing completed. Shutting down the fuzz application 00:21:43.249 00:21:43.249 Dumping successful admin opcodes: 00:21:43.249 8, 9, 10, 24, 00:21:43.249 Dumping successful io opcodes: 00:21:43.249 0, 9, 00:21:43.249 NS: 0x200003aeff00 I/O qp, Total commands completed: 441835, total successful commands: 2574, random_seed: 261365504 00:21:43.249 NS: 0x200003aeff00 admin qp, Total commands completed: 55312, total successful commands: 441, random_seed: 3486040000 00:21:43.249 01:01:26 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:43.508 Fuzzing completed. 
Shutting down the fuzz application 00:21:43.508 00:21:43.508 Dumping successful admin opcodes: 00:21:43.508 24, 00:21:43.508 Dumping successful io opcodes: 00:21:43.508 00:21:43.508 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 317580850 00:21:43.508 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 317771789 00:21:43.508 01:01:27 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.508 01:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.508 01:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:43.508 01:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.508 01:01:27 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:43.508 01:01:27 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:43.508 01:01:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:43.508 01:01:27 -- nvmf/common.sh@116 -- # sync 00:21:43.508 01:01:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:43.508 01:01:27 -- nvmf/common.sh@119 -- # set +e 00:21:43.508 01:01:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:43.508 01:01:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:43.508 rmmod nvme_tcp 00:21:43.508 rmmod nvme_fabrics 00:21:43.508 rmmod nvme_keyring 00:21:43.508 01:01:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:43.508 01:01:27 -- nvmf/common.sh@123 -- # set -e 00:21:43.508 01:01:27 -- nvmf/common.sh@124 -- # return 0 00:21:43.508 01:01:27 -- nvmf/common.sh@477 -- # '[' -n 3438247 ']' 00:21:43.508 01:01:27 -- nvmf/common.sh@478 -- # killprocess 3438247 00:21:43.508 01:01:27 -- common/autotest_common.sh@926 -- # '[' -z 3438247 ']' 00:21:43.508 01:01:27 -- common/autotest_common.sh@930 -- # kill -0 3438247 00:21:43.508 01:01:27 -- common/autotest_common.sh@931 -- # uname 00:21:43.508 01:01:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.508 01:01:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3438247 00:21:43.508 01:01:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:43.508 01:01:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:43.508 01:01:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3438247' 00:21:43.508 killing process with pid 3438247 00:21:43.508 01:01:27 -- common/autotest_common.sh@945 -- # kill 3438247 00:21:43.508 01:01:27 -- common/autotest_common.sh@950 -- # wait 3438247 00:21:43.767 01:01:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:43.767 01:01:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:43.767 01:01:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:43.767 01:01:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.767 01:01:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:43.767 01:01:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.767 01:01:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.767 01:01:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.674 01:01:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:45.674 01:01:29 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:45.933 00:21:45.933 real 0m37.943s 00:21:45.933 user 0m52.164s 00:21:45.933 sys 
0m15.331s 00:21:45.933 01:01:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.933 01:01:29 -- common/autotest_common.sh@10 -- # set +x 00:21:45.933 ************************************ 00:21:45.933 END TEST nvmf_fuzz 00:21:45.933 ************************************ 00:21:45.933 01:01:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:45.933 01:01:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:45.933 01:01:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:45.933 01:01:29 -- common/autotest_common.sh@10 -- # set +x 00:21:45.933 ************************************ 00:21:45.933 START TEST nvmf_multiconnection 00:21:45.933 ************************************ 00:21:45.933 01:01:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:45.933 * Looking for test storage... 00:21:45.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.933 01:01:29 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.933 01:01:29 -- nvmf/common.sh@7 -- # uname -s 00:21:45.933 01:01:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.933 01:01:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.933 01:01:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.933 01:01:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.933 01:01:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.933 01:01:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.933 01:01:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.933 01:01:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.933 01:01:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.933 01:01:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.933 01:01:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.933 01:01:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.933 01:01:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.933 01:01:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.933 01:01:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.933 01:01:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.933 01:01:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.933 01:01:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.933 01:01:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.933 01:01:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.933 01:01:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.934 01:01:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.934 01:01:29 -- paths/export.sh@5 -- # export PATH 00:21:45.934 01:01:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.934 01:01:29 -- nvmf/common.sh@46 -- # : 0 00:21:45.934 01:01:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:45.934 01:01:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:45.934 01:01:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:45.934 01:01:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.934 01:01:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.934 01:01:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:45.934 01:01:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:45.934 01:01:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:45.934 01:01:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.934 01:01:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.934 01:01:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:45.934 01:01:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:45.934 01:01:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:45.934 01:01:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.934 01:01:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:45.934 01:01:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:45.934 01:01:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:45.934 01:01:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.934 01:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.934 01:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.934 01:01:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:45.934 01:01:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:45.934 01:01:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:45.934 01:01:29 -- common/autotest_common.sh@10 -- 
# set +x 00:21:47.836 01:01:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.836 01:01:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:47.836 01:01:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:47.836 01:01:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:47.836 01:01:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:47.836 01:01:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:47.836 01:01:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:47.836 01:01:31 -- nvmf/common.sh@294 -- # net_devs=() 00:21:47.836 01:01:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:47.836 01:01:31 -- nvmf/common.sh@295 -- # e810=() 00:21:47.836 01:01:31 -- nvmf/common.sh@295 -- # local -ga e810 00:21:47.836 01:01:31 -- nvmf/common.sh@296 -- # x722=() 00:21:47.836 01:01:31 -- nvmf/common.sh@296 -- # local -ga x722 00:21:47.836 01:01:31 -- nvmf/common.sh@297 -- # mlx=() 00:21:47.836 01:01:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:47.836 01:01:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.836 01:01:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:47.836 01:01:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:47.836 01:01:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:47.836 01:01:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:47.836 01:01:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:47.836 01:01:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:47.836 01:01:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.836 01:01:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:47.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:47.836 01:01:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.836 01:01:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.836 01:01:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.836 01:01:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.837 01:01:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:47.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:47.837 01:01:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.837 01:01:31 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:47.837 01:01:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.837 01:01:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.837 01:01:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.837 01:01:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.837 01:01:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:47.837 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:47.837 01:01:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.837 01:01:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.837 01:01:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.837 01:01:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.837 01:01:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.837 01:01:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:47.837 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:47.837 01:01:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.837 01:01:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:47.837 01:01:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:47.837 01:01:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:47.837 01:01:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:47.837 01:01:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.837 01:01:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.837 01:01:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.837 01:01:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:47.837 01:01:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.837 01:01:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.837 01:01:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:47.837 01:01:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.837 01:01:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.837 01:01:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:47.837 01:01:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:47.837 01:01:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.837 01:01:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.837 01:01:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.837 01:01:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.837 01:01:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:47.837 01:01:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.837 01:01:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.095 01:01:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.095 01:01:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:48.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:21:48.095 00:21:48.095 --- 10.0.0.2 ping statistics --- 00:21:48.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.095 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:48.095 01:01:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:21:48.095 00:21:48.095 --- 10.0.0.1 ping statistics --- 00:21:48.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.095 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:48.095 01:01:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.095 01:01:32 -- nvmf/common.sh@410 -- # return 0 00:21:48.095 01:01:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:48.095 01:01:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.095 01:01:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:48.095 01:01:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:48.095 01:01:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.095 01:01:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:48.095 01:01:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:48.095 01:01:32 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:48.095 01:01:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:48.095 01:01:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:48.095 01:01:32 -- common/autotest_common.sh@10 -- # set +x 00:21:48.095 01:01:32 -- nvmf/common.sh@469 -- # nvmfpid=3444229 00:21:48.095 01:01:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:48.095 01:01:32 -- nvmf/common.sh@470 -- # waitforlisten 3444229 00:21:48.095 01:01:32 -- common/autotest_common.sh@819 -- # '[' -z 3444229 ']' 00:21:48.095 01:01:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.095 01:01:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.095 01:01:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.095 01:01:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.095 01:01:32 -- common/autotest_common.sh@10 -- # set +x 00:21:48.095 [2024-07-23 01:01:32.132457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:21:48.095 [2024-07-23 01:01:32.132531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.095 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.095 [2024-07-23 01:01:32.202461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.095 [2024-07-23 01:01:32.294669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:48.095 [2024-07-23 01:01:32.294816] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
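Here the multiconnection test starts its own nvmf_tgt inside the namespace, this time with core mask 0xF so four reactors come up, and waitforlisten blocks until the target's RPC socket at /var/tmp/spdk.sock is ready before any rpc_cmd calls are issued. A simplified sketch of that start-and-wait pattern; the polling loop is only a stand-in for the real waitforlisten helper, which also verifies that the RPC server answers:

  #!/usr/bin/env bash
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  ip netns exec cvl_0_0_ns_spdk "$SPDK_TGT" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break                       # RPC socket is up, safe to start issuing RPCs
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done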
00:21:48.095 [2024-07-23 01:01:32.294836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.095 [2024-07-23 01:01:32.294852] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.095 [2024-07-23 01:01:32.294916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.095 [2024-07-23 01:01:32.294967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.095 [2024-07-23 01:01:32.295003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.095 [2024-07-23 01:01:32.295006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.085 01:01:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.085 01:01:33 -- common/autotest_common.sh@852 -- # return 0 00:21:49.085 01:01:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:49.085 01:01:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.085 01:01:33 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 [2024-07-23 01:01:33.124221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:49.085 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.085 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 Malloc1 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 [2024-07-23 01:01:33.181298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.085 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:49.085 01:01:33 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 Malloc2 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.085 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 Malloc3 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.085 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.085 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:49.085 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.085 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 Malloc4 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.342 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 Malloc5 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.342 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 Malloc6 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.342 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 Malloc7 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.342 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 Malloc8 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.342 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.342 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.342 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:49.342 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.342 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 Malloc9 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
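The stretch of trace surrounding this point is the expansion of multiconnection.sh's setup loop: with NVMF_SUBSYS=11, each pass creates a 64 MB malloc bdev with 512-byte blocks, wraps it in its own subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, and exposes it on the TCP listener at 10.0.0.2:4420. Condensed into a sketch, with rpc_cmd written out as a plain wrapper around SPDK's scripts/rpc.py (an assumption of this sketch; the harness's rpc_cmd adds retry and xtrace handling on top):

  #!/usr/bin/env bash
  rpc_cmd() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"   # talks to /var/tmp/spdk.sock by default
  }

  NVMF_SUBSYS=11
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192              # once, before the loop
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MB ramdisk, 512-byte blocks
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

Each subsystem is then attached from the host side with nvme connect against the same address and service id, and the harness polls lsblk until a device with the matching SPDKN serial shows up, which is the waitforserial pattern visible further down in the trace.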
00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.599 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 Malloc10 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.599 01:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 Malloc11 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:49.599 01:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.599 01:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 01:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.599 01:01:33 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:49.599 01:01:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.599 01:01:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:50.165 01:01:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:50.165 01:01:34 -- common/autotest_common.sh@1177 -- # local i=0 00:21:50.165 01:01:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.165 01:01:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:50.165 01:01:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:52.700 01:01:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:52.700 01:01:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:52.700 01:01:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:52.700 01:01:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:52.700 01:01:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.700 01:01:36 -- common/autotest_common.sh@1187 -- # return 0 00:21:52.700 01:01:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.700 01:01:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:52.958 01:01:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:52.958 01:01:37 -- common/autotest_common.sh@1177 -- # local i=0 00:21:52.958 01:01:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:52.958 01:01:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:52.958 01:01:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:55.495 01:01:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:55.495 01:01:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:55.495 01:01:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:55.495 01:01:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:55.495 01:01:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.495 01:01:39 -- common/autotest_common.sh@1187 -- # return 0 00:21:55.495 01:01:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.495 01:01:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:55.754 01:01:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:55.754 01:01:39 -- common/autotest_common.sh@1177 -- # local i=0 00:21:55.754 01:01:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:55.754 01:01:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:55.754 01:01:39 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:21:57.654 01:01:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:57.654 01:01:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:57.654 01:01:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:57.654 01:01:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:57.654 01:01:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:57.654 01:01:41 -- common/autotest_common.sh@1187 -- # return 0 00:21:57.654 01:01:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.654 01:01:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:58.589 01:01:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:58.589 01:01:42 -- common/autotest_common.sh@1177 -- # local i=0 00:21:58.589 01:01:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:58.589 01:01:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:58.589 01:01:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:00.544 01:01:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:00.544 01:01:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:00.544 01:01:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:00.544 01:01:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:00.544 01:01:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:00.544 01:01:44 -- common/autotest_common.sh@1187 -- # return 0 00:22:00.544 01:01:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.544 01:01:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:01.109 01:01:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:01.109 01:01:45 -- common/autotest_common.sh@1177 -- # local i=0 00:22:01.109 01:01:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.109 01:01:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:01.109 01:01:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:03.013 01:01:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:03.013 01:01:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:03.013 01:01:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:03.013 01:01:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:03.013 01:01:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.013 01:01:47 -- common/autotest_common.sh@1187 -- # return 0 00:22:03.013 01:01:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.013 01:01:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:03.951 01:01:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:03.951 01:01:48 -- common/autotest_common.sh@1177 -- # local i=0 00:22:03.951 01:01:48 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:03.951 01:01:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:03.951 01:01:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:05.850 01:01:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:05.850 01:01:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:05.850 01:01:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:05.850 01:01:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:05.850 01:01:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:05.850 01:01:50 -- common/autotest_common.sh@1187 -- # return 0 00:22:05.850 01:01:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:05.850 01:01:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:06.786 01:01:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:06.786 01:01:50 -- common/autotest_common.sh@1177 -- # local i=0 00:22:06.786 01:01:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:06.786 01:01:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:06.786 01:01:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:08.692 01:01:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:08.692 01:01:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:08.692 01:01:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:08.692 01:01:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:08.692 01:01:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.692 01:01:52 -- common/autotest_common.sh@1187 -- # return 0 00:22:08.692 01:01:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.692 01:01:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:09.623 01:01:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:09.623 01:01:53 -- common/autotest_common.sh@1177 -- # local i=0 00:22:09.623 01:01:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:09.623 01:01:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:09.623 01:01:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:11.525 01:01:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:11.525 01:01:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:11.525 01:01:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:11.525 01:01:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:11.525 01:01:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.525 01:01:55 -- common/autotest_common.sh@1187 -- # return 0 00:22:11.525 01:01:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.525 01:01:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:12.461 01:01:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:12.461 
01:01:56 -- common/autotest_common.sh@1177 -- # local i=0 00:22:12.461 01:01:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.461 01:01:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:12.461 01:01:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:14.366 01:01:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:14.366 01:01:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:14.366 01:01:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:14.366 01:01:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:14.366 01:01:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.366 01:01:58 -- common/autotest_common.sh@1187 -- # return 0 00:22:14.366 01:01:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.366 01:01:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:15.303 01:01:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:15.303 01:01:59 -- common/autotest_common.sh@1177 -- # local i=0 00:22:15.303 01:01:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.303 01:01:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:15.303 01:01:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:17.207 01:02:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:17.464 01:02:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:17.464 01:02:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:17.464 01:02:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:17.464 01:02:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.464 01:02:01 -- common/autotest_common.sh@1187 -- # return 0 00:22:17.464 01:02:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.464 01:02:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:18.030 01:02:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:18.030 01:02:02 -- common/autotest_common.sh@1177 -- # local i=0 00:22:18.030 01:02:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.030 01:02:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:18.030 01:02:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:20.599 01:02:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:20.599 01:02:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:20.599 01:02:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:20.599 01:02:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:20.599 01:02:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.599 01:02:04 -- common/autotest_common.sh@1187 -- # return 0 00:22:20.599 01:02:04 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:20.599 [global] 00:22:20.599 thread=1 00:22:20.599 invalidate=1 00:22:20.599 rw=read 00:22:20.599 time_based=1 00:22:20.599 
runtime=10 00:22:20.599 ioengine=libaio 00:22:20.599 direct=1 00:22:20.599 bs=262144 00:22:20.599 iodepth=64 00:22:20.599 norandommap=1 00:22:20.599 numjobs=1 00:22:20.599 00:22:20.599 [job0] 00:22:20.599 filename=/dev/nvme0n1 00:22:20.599 [job1] 00:22:20.599 filename=/dev/nvme10n1 00:22:20.599 [job2] 00:22:20.599 filename=/dev/nvme1n1 00:22:20.599 [job3] 00:22:20.599 filename=/dev/nvme2n1 00:22:20.599 [job4] 00:22:20.599 filename=/dev/nvme3n1 00:22:20.599 [job5] 00:22:20.600 filename=/dev/nvme4n1 00:22:20.600 [job6] 00:22:20.600 filename=/dev/nvme5n1 00:22:20.600 [job7] 00:22:20.600 filename=/dev/nvme6n1 00:22:20.600 [job8] 00:22:20.600 filename=/dev/nvme7n1 00:22:20.600 [job9] 00:22:20.600 filename=/dev/nvme8n1 00:22:20.600 [job10] 00:22:20.600 filename=/dev/nvme9n1 00:22:20.600 Could not set queue depth (nvme0n1) 00:22:20.600 Could not set queue depth (nvme10n1) 00:22:20.600 Could not set queue depth (nvme1n1) 00:22:20.600 Could not set queue depth (nvme2n1) 00:22:20.600 Could not set queue depth (nvme3n1) 00:22:20.600 Could not set queue depth (nvme4n1) 00:22:20.600 Could not set queue depth (nvme5n1) 00:22:20.600 Could not set queue depth (nvme6n1) 00:22:20.600 Could not set queue depth (nvme7n1) 00:22:20.600 Could not set queue depth (nvme8n1) 00:22:20.600 Could not set queue depth (nvme9n1) 00:22:20.600 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:20.600 fio-3.35 00:22:20.600 Starting 11 threads 00:22:32.811 00:22:32.811 job0: (groupid=0, jobs=1): err= 0: pid=3448600: Tue Jul 23 01:02:15 2024 00:22:32.811 read: IOPS=566, BW=142MiB/s (148MB/s)(1429MiB/10089msec) 00:22:32.811 slat (usec): min=10, max=98265, avg=1134.86, stdev=4567.91 00:22:32.811 clat (msec): min=5, max=402, avg=111.80, stdev=53.00 00:22:32.811 lat (msec): min=5, max=402, avg=112.93, stdev=53.62 00:22:32.811 clat percentiles (msec): 00:22:32.811 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 56], 20.00th=[ 74], 00:22:32.811 | 30.00th=[ 87], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 115], 00:22:32.811 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 203], 00:22:32.811 | 99.00th=[ 317], 99.50th=[ 376], 99.90th=[ 397], 99.95th=[ 397], 00:22:32.811 | 99.99th=[ 401] 00:22:32.811 bw ( KiB/s): min=58368, max=244736, per=7.93%, avg=144628.10, 
stdev=44307.22, samples=20 00:22:32.811 iops : min= 228, max= 956, avg=564.95, stdev=173.08, samples=20 00:22:32.811 lat (msec) : 10=0.21%, 20=0.68%, 50=6.02%, 100=35.82%, 250=54.08% 00:22:32.811 lat (msec) : 500=3.19% 00:22:32.811 cpu : usr=0.36%, sys=1.87%, ctx=1613, majf=0, minf=3721 00:22:32.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:32.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.811 issued rwts: total=5714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.811 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.811 job1: (groupid=0, jobs=1): err= 0: pid=3448602: Tue Jul 23 01:02:15 2024 00:22:32.811 read: IOPS=497, BW=124MiB/s (131MB/s)(1249MiB/10032msec) 00:22:32.811 slat (usec): min=14, max=113908, avg=1766.24, stdev=6146.25 00:22:32.811 clat (msec): min=4, max=499, avg=126.67, stdev=70.98 00:22:32.811 lat (msec): min=4, max=507, avg=128.43, stdev=72.17 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 62], 00:22:32.812 | 30.00th=[ 88], 40.00th=[ 109], 50.00th=[ 128], 60.00th=[ 140], 00:22:32.812 | 70.00th=[ 153], 80.00th=[ 169], 90.00th=[ 201], 95.00th=[ 249], 00:22:32.812 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 456], 99.95th=[ 464], 00:22:32.812 | 99.99th=[ 502] 00:22:32.812 bw ( KiB/s): min=36352, max=354304, per=6.92%, avg=126246.15, stdev=71725.65, samples=20 00:22:32.812 iops : min= 142, max= 1384, avg=493.10, stdev=280.18, samples=20 00:22:32.812 lat (msec) : 10=1.02%, 20=0.72%, 50=13.07%, 100=20.96%, 250=59.30% 00:22:32.812 lat (msec) : 500=4.92% 00:22:32.812 cpu : usr=0.33%, sys=1.95%, ctx=1191, majf=0, minf=4097 00:22:32.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:32.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.812 issued rwts: total=4995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.812 job2: (groupid=0, jobs=1): err= 0: pid=3448606: Tue Jul 23 01:02:15 2024 00:22:32.812 read: IOPS=733, BW=183MiB/s (192MB/s)(1847MiB/10068msec) 00:22:32.812 slat (usec): min=11, max=218619, avg=1157.61, stdev=4838.94 00:22:32.812 clat (usec): min=1443, max=376393, avg=85992.42, stdev=49218.75 00:22:32.812 lat (usec): min=1472, max=466225, avg=87150.03, stdev=49989.26 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 52], 00:22:32.812 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 85], 00:22:32.812 | 70.00th=[ 104], 80.00th=[ 123], 90.00th=[ 148], 95.00th=[ 174], 00:22:32.812 | 99.00th=[ 245], 99.50th=[ 342], 99.90th=[ 368], 99.95th=[ 368], 00:22:32.812 | 99.99th=[ 376] 00:22:32.812 bw ( KiB/s): min=46172, max=355328, per=10.28%, avg=187510.80, stdev=78640.60, samples=20 00:22:32.812 iops : min= 180, max= 1388, avg=732.40, stdev=307.25, samples=20 00:22:32.812 lat (msec) : 2=0.16%, 4=0.16%, 10=0.57%, 20=2.38%, 50=15.02% 00:22:32.812 lat (msec) : 100=49.51%, 250=31.29%, 500=0.89% 00:22:32.812 cpu : usr=0.59%, sys=2.51%, ctx=1665, majf=0, minf=4097 00:22:32.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:32.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.812 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.812 issued rwts: total=7388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.812 job3: (groupid=0, jobs=1): err= 0: pid=3448607: Tue Jul 23 01:02:15 2024 00:22:32.812 read: IOPS=983, BW=246MiB/s (258MB/s)(2476MiB/10075msec) 00:22:32.812 slat (usec): min=10, max=75528, avg=608.26, stdev=2456.57 00:22:32.812 clat (usec): min=1032, max=404790, avg=64445.11, stdev=45395.32 00:22:32.812 lat (usec): min=1090, max=405662, avg=65053.37, stdev=45590.24 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 34], 00:22:32.812 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 51], 60.00th=[ 58], 00:22:32.812 | 70.00th=[ 68], 80.00th=[ 90], 90.00th=[ 129], 95.00th=[ 146], 00:22:32.812 | 99.00th=[ 201], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 401], 00:22:32.812 | 99.99th=[ 405] 00:22:32.812 bw ( KiB/s): min=70656, max=498688, per=13.81%, avg=251911.80, stdev=119141.78, samples=20 00:22:32.812 iops : min= 276, max= 1948, avg=984.00, stdev=465.42, samples=20 00:22:32.812 lat (msec) : 2=0.07%, 4=0.01%, 10=0.24%, 20=1.48%, 50=48.34% 00:22:32.812 lat (msec) : 100=32.03%, 250=17.10%, 500=0.72% 00:22:32.812 cpu : usr=0.39%, sys=3.42%, ctx=2134, majf=0, minf=4097 00:22:32.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:32.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.812 issued rwts: total=9905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.812 job4: (groupid=0, jobs=1): err= 0: pid=3448608: Tue Jul 23 01:02:15 2024 00:22:32.812 read: IOPS=581, BW=145MiB/s (153MB/s)(1463MiB/10055msec) 00:22:32.812 slat (usec): min=9, max=134040, avg=1064.95, stdev=4693.69 00:22:32.812 clat (usec): min=1628, max=474866, avg=108849.97, stdev=75841.59 00:22:32.812 lat (usec): min=1659, max=474939, avg=109914.92, stdev=76305.46 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 51], 00:22:32.812 | 30.00th=[ 71], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 104], 00:22:32.812 | 70.00th=[ 121], 80.00th=[ 146], 90.00th=[ 205], 95.00th=[ 275], 00:22:32.812 | 99.00th=[ 409], 99.50th=[ 439], 99.90th=[ 468], 99.95th=[ 472], 00:22:32.812 | 99.99th=[ 477] 00:22:32.812 bw ( KiB/s): min=54784, max=266752, per=8.12%, avg=148156.25, stdev=61086.40, samples=20 00:22:32.812 iops : min= 214, max= 1042, avg=578.70, stdev=238.58, samples=20 00:22:32.812 lat (msec) : 2=0.02%, 4=0.09%, 10=1.32%, 20=3.93%, 50=14.26% 00:22:32.812 lat (msec) : 100=36.14%, 250=38.03%, 500=6.22% 00:22:32.812 cpu : usr=0.34%, sys=1.89%, ctx=1605, majf=0, minf=4097 00:22:32.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:32.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.812 issued rwts: total=5850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.812 job5: (groupid=0, jobs=1): err= 0: pid=3448623: Tue Jul 23 01:02:15 2024 00:22:32.812 read: IOPS=663, BW=166MiB/s (174MB/s)(1671MiB/10068msec) 00:22:32.812 slat (usec): min=9, max=47237, avg=1254.78, stdev=3616.24 00:22:32.812 clat (usec): min=960, max=200202, 
avg=95102.20, stdev=36242.63 00:22:32.812 lat (usec): min=993, max=200944, avg=96356.98, stdev=36604.78 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 53], 20.00th=[ 62], 00:22:32.812 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 91], 60.00th=[ 104], 00:22:32.812 | 70.00th=[ 115], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 161], 00:22:32.812 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 194], 00:22:32.812 | 99.99th=[ 201] 00:22:32.812 bw ( KiB/s): min=101376, max=285696, per=9.29%, avg=169459.40, stdev=51276.92, samples=20 00:22:32.812 iops : min= 396, max= 1116, avg=661.90, stdev=200.31, samples=20 00:22:32.812 lat (usec) : 1000=0.03% 00:22:32.812 lat (msec) : 2=0.03%, 4=0.03%, 10=0.75%, 20=0.82%, 50=6.42% 00:22:32.812 lat (msec) : 100=49.18%, 250=42.74% 00:22:32.812 cpu : usr=0.40%, sys=2.54%, ctx=1630, majf=0, minf=4097 00:22:32.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:32.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.812 issued rwts: total=6682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.812 job6: (groupid=0, jobs=1): err= 0: pid=3448646: Tue Jul 23 01:02:15 2024 00:22:32.812 read: IOPS=597, BW=149MiB/s (157MB/s)(1507MiB/10090msec) 00:22:32.812 slat (usec): min=9, max=303864, avg=963.70, stdev=6382.93 00:22:32.812 clat (msec): min=2, max=474, avg=106.07, stdev=79.00 00:22:32.812 lat (msec): min=2, max=648, avg=107.03, stdev=79.92 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 51], 00:22:32.812 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 84], 60.00th=[ 103], 00:22:32.812 | 70.00th=[ 118], 80.00th=[ 148], 90.00th=[ 203], 95.00th=[ 266], 00:22:32.812 | 99.00th=[ 430], 99.50th=[ 443], 99.90th=[ 468], 99.95th=[ 468], 00:22:32.812 | 99.99th=[ 477] 00:22:32.812 bw ( KiB/s): min=32256, max=316806, per=8.37%, avg=152672.30, stdev=78789.57, samples=20 00:22:32.812 iops : min= 126, max= 1237, avg=596.35, stdev=307.71, samples=20 00:22:32.812 lat (msec) : 4=0.33%, 10=1.04%, 20=2.17%, 50=16.57%, 100=38.95% 00:22:32.812 lat (msec) : 250=35.41%, 500=5.52% 00:22:32.812 cpu : usr=0.28%, sys=1.72%, ctx=1679, majf=0, minf=4097 00:22:32.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:32.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.812 issued rwts: total=6029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.812 job7: (groupid=0, jobs=1): err= 0: pid=3448670: Tue Jul 23 01:02:15 2024 00:22:32.812 read: IOPS=684, BW=171MiB/s (179MB/s)(1720MiB/10056msec) 00:22:32.812 slat (usec): min=10, max=128366, avg=1243.28, stdev=3925.82 00:22:32.812 clat (msec): min=5, max=379, avg=92.26, stdev=44.17 00:22:32.812 lat (msec): min=5, max=451, avg=93.50, stdev=44.77 00:22:32.812 clat percentiles (msec): 00:22:32.812 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 48], 20.00th=[ 61], 00:22:32.813 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 99], 00:22:32.813 | 70.00th=[ 110], 80.00th=[ 122], 90.00th=[ 146], 95.00th=[ 163], 00:22:32.813 | 99.00th=[ 253], 99.50th=[ 330], 99.90th=[ 359], 99.95th=[ 359], 00:22:32.813 | 99.99th=[ 380] 00:22:32.813 bw ( 
KiB/s): min=97280, max=283136, per=9.57%, avg=174494.90, stdev=50742.85, samples=20 00:22:32.813 iops : min= 380, max= 1106, avg=681.60, stdev=198.22, samples=20 00:22:32.813 lat (msec) : 10=0.36%, 20=3.78%, 50=6.79%, 100=51.14%, 250=36.92% 00:22:32.813 lat (msec) : 500=1.00% 00:22:32.813 cpu : usr=0.47%, sys=2.58%, ctx=1672, majf=0, minf=4097 00:22:32.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:32.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.813 issued rwts: total=6879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.813 job8: (groupid=0, jobs=1): err= 0: pid=3448761: Tue Jul 23 01:02:15 2024 00:22:32.813 read: IOPS=585, BW=146MiB/s (153MB/s)(1476MiB/10087msec) 00:22:32.813 slat (usec): min=14, max=87829, avg=1656.55, stdev=5268.82 00:22:32.813 clat (msec): min=10, max=500, avg=107.59, stdev=68.81 00:22:32.813 lat (msec): min=10, max=511, avg=109.25, stdev=69.90 00:22:32.813 clat percentiles (msec): 00:22:32.813 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 56], 00:22:32.813 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 90], 60.00th=[ 107], 00:22:32.813 | 70.00th=[ 127], 80.00th=[ 150], 90.00th=[ 194], 95.00th=[ 230], 00:22:32.813 | 99.00th=[ 393], 99.50th=[ 422], 99.90th=[ 464], 99.95th=[ 498], 00:22:32.813 | 99.99th=[ 502] 00:22:32.813 bw ( KiB/s): min=40960, max=285184, per=8.20%, avg=149511.60, stdev=81258.56, samples=20 00:22:32.813 iops : min= 160, max= 1114, avg=584.00, stdev=317.40, samples=20 00:22:32.813 lat (msec) : 20=0.46%, 50=12.36%, 100=42.61%, 250=40.44%, 500=4.12% 00:22:32.813 lat (msec) : 750=0.02% 00:22:32.813 cpu : usr=0.36%, sys=2.22%, ctx=1202, majf=0, minf=4097 00:22:32.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:32.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.813 issued rwts: total=5905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.813 job9: (groupid=0, jobs=1): err= 0: pid=3448771: Tue Jul 23 01:02:15 2024 00:22:32.813 read: IOPS=475, BW=119MiB/s (125MB/s)(1200MiB/10094msec) 00:22:32.813 slat (usec): min=9, max=225352, avg=1556.34, stdev=7658.89 00:22:32.813 clat (usec): min=1702, max=576626, avg=132930.23, stdev=77345.99 00:22:32.813 lat (usec): min=1734, max=583104, avg=134486.57, stdev=78459.05 00:22:32.813 clat percentiles (msec): 00:22:32.813 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 58], 20.00th=[ 77], 00:22:32.813 | 30.00th=[ 90], 40.00th=[ 106], 50.00th=[ 123], 60.00th=[ 138], 00:22:32.813 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 228], 95.00th=[ 284], 00:22:32.813 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 527], 99.95th=[ 575], 00:22:32.813 | 99.99th=[ 575] 00:22:32.813 bw ( KiB/s): min=31294, max=196608, per=6.65%, avg=121255.80, stdev=45760.01, samples=20 00:22:32.813 iops : min= 122, max= 768, avg=473.60, stdev=178.75, samples=20 00:22:32.813 lat (msec) : 2=0.02%, 4=0.19%, 10=1.65%, 20=3.08%, 50=4.23% 00:22:32.813 lat (msec) : 100=27.90%, 250=55.69%, 500=7.15%, 750=0.10% 00:22:32.813 cpu : usr=0.31%, sys=1.61%, ctx=1251, majf=0, minf=4097 00:22:32.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.813 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.813 job10: (groupid=0, jobs=1): err= 0: pid=3448772: Tue Jul 23 01:02:15 2024 00:22:32.813 read: IOPS=773, BW=193MiB/s (203MB/s)(1940MiB/10032msec) 00:22:32.813 slat (usec): min=13, max=95216, avg=1194.45, stdev=3876.49 00:22:32.813 clat (msec): min=3, max=246, avg=81.50, stdev=37.52 00:22:32.813 lat (msec): min=4, max=268, avg=82.69, stdev=38.12 00:22:32.813 clat percentiles (msec): 00:22:32.813 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 48], 00:22:32.813 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 78], 60.00th=[ 85], 00:22:32.813 | 70.00th=[ 95], 80.00th=[ 114], 90.00th=[ 133], 95.00th=[ 155], 00:22:32.813 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 218], 99.95th=[ 232], 00:22:32.813 | 99.99th=[ 247] 00:22:32.813 bw ( KiB/s): min=104960, max=456192, per=10.80%, avg=197000.75, stdev=81419.70, samples=20 00:22:32.813 iops : min= 410, max= 1782, avg=769.50, stdev=318.06, samples=20 00:22:32.813 lat (msec) : 4=0.01%, 10=0.23%, 20=0.79%, 50=20.52%, 100=51.15% 00:22:32.813 lat (msec) : 250=27.30% 00:22:32.813 cpu : usr=0.58%, sys=2.76%, ctx=1733, majf=0, minf=4097 00:22:32.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:32.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.813 issued rwts: total=7759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.813 00:22:32.813 Run status group 0 (all jobs): 00:22:32.813 READ: bw=1781MiB/s (1867MB/s), 119MiB/s-246MiB/s (125MB/s-258MB/s), io=17.6GiB (18.8GB), run=10032-10094msec 00:22:32.813 00:22:32.813 Disk stats (read/write): 00:22:32.813 nvme0n1: ios=11272/0, merge=0/0, ticks=1244770/0, in_queue=1244770, util=95.30% 00:22:32.813 nvme10n1: ios=9811/0, merge=0/0, ticks=1237777/0, in_queue=1237777, util=95.66% 00:22:32.813 nvme1n1: ios=14619/0, merge=0/0, ticks=1240851/0, in_queue=1240851, util=96.18% 00:22:32.813 nvme2n1: ios=19644/0, merge=0/0, ticks=1244928/0, in_queue=1244928, util=96.47% 00:22:32.813 nvme3n1: ios=11526/0, merge=0/0, ticks=1246917/0, in_queue=1246917, util=96.62% 00:22:32.813 nvme4n1: ios=13188/0, merge=0/0, ticks=1237445/0, in_queue=1237445, util=97.26% 00:22:32.813 nvme5n1: ios=11899/0, merge=0/0, ticks=1247018/0, in_queue=1247018, util=97.58% 00:22:32.813 nvme6n1: ios=13604/0, merge=0/0, ticks=1234673/0, in_queue=1234673, util=97.81% 00:22:32.813 nvme7n1: ios=11647/0, merge=0/0, ticks=1234810/0, in_queue=1234810, util=98.64% 00:22:32.813 nvme8n1: ios=9472/0, merge=0/0, ticks=1238908/0, in_queue=1238908, util=99.00% 00:22:32.813 nvme9n1: ios=15324/0, merge=0/0, ticks=1236524/0, in_queue=1236524, util=99.26% 00:22:32.813 01:02:15 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:32.813 [global] 00:22:32.813 thread=1 00:22:32.813 invalidate=1 00:22:32.813 rw=randwrite 00:22:32.813 time_based=1 00:22:32.813 runtime=10 00:22:32.813 ioengine=libaio 00:22:32.813 direct=1 00:22:32.813 bs=262144 00:22:32.813 iodepth=64 00:22:32.813 norandommap=1 00:22:32.813 numjobs=1 00:22:32.813 00:22:32.813 [job0] 00:22:32.813 filename=/dev/nvme0n1 
00:22:32.813 [job1] 00:22:32.813 filename=/dev/nvme10n1 00:22:32.813 [job2] 00:22:32.813 filename=/dev/nvme1n1 00:22:32.813 [job3] 00:22:32.813 filename=/dev/nvme2n1 00:22:32.813 [job4] 00:22:32.813 filename=/dev/nvme3n1 00:22:32.813 [job5] 00:22:32.813 filename=/dev/nvme4n1 00:22:32.813 [job6] 00:22:32.813 filename=/dev/nvme5n1 00:22:32.813 [job7] 00:22:32.813 filename=/dev/nvme6n1 00:22:32.813 [job8] 00:22:32.813 filename=/dev/nvme7n1 00:22:32.813 [job9] 00:22:32.813 filename=/dev/nvme8n1 00:22:32.813 [job10] 00:22:32.813 filename=/dev/nvme9n1 00:22:32.813 Could not set queue depth (nvme0n1) 00:22:32.813 Could not set queue depth (nvme10n1) 00:22:32.813 Could not set queue depth (nvme1n1) 00:22:32.813 Could not set queue depth (nvme2n1) 00:22:32.813 Could not set queue depth (nvme3n1) 00:22:32.813 Could not set queue depth (nvme4n1) 00:22:32.813 Could not set queue depth (nvme5n1) 00:22:32.813 Could not set queue depth (nvme6n1) 00:22:32.813 Could not set queue depth (nvme7n1) 00:22:32.813 Could not set queue depth (nvme8n1) 00:22:32.813 Could not set queue depth (nvme9n1) 00:22:32.813 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.813 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.813 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.813 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.813 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.814 fio-3.35 00:22:32.814 Starting 11 threads 00:22:42.790 00:22:42.790 job0: (groupid=0, jobs=1): err= 0: pid=3449813: Tue Jul 23 01:02:26 2024 00:22:42.790 write: IOPS=708, BW=177MiB/s (186MB/s)(1784MiB/10071msec); 0 zone resets 00:22:42.790 slat (usec): min=25, max=34594, avg=1392.60, stdev=2628.60 00:22:42.790 clat (msec): min=17, max=192, avg=88.84, stdev=33.38 00:22:42.790 lat (msec): min=17, max=192, avg=90.23, stdev=33.79 00:22:42.790 clat percentiles (msec): 00:22:42.790 | 1.00th=[ 42], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 53], 00:22:42.790 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 92], 00:22:42.790 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 138], 95.00th=[ 150], 00:22:42.790 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:22:42.790 | 99.99th=[ 192] 00:22:42.790 bw ( KiB/s): min=96256, max=352063, per=16.68%, avg=181068.75, stdev=61239.87, samples=20 00:22:42.790 iops : min= 376, max= 1375, avg=707.25, stdev=239.24, samples=20 00:22:42.790 lat (msec) : 20=0.06%, 50=17.49%, 
100=53.12%, 250=29.34% 00:22:42.790 cpu : usr=2.11%, sys=2.46%, ctx=1805, majf=0, minf=1 00:22:42.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:42.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.790 issued rwts: total=0,7137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.790 job1: (groupid=0, jobs=1): err= 0: pid=3449825: Tue Jul 23 01:02:26 2024 00:22:42.790 write: IOPS=357, BW=89.3MiB/s (93.7MB/s)(912MiB/10211msec); 0 zone resets 00:22:42.790 slat (usec): min=16, max=223687, avg=1884.80, stdev=10901.86 00:22:42.790 clat (usec): min=1291, max=1386.9k, avg=177103.02, stdev=257694.22 00:22:42.790 lat (usec): min=1329, max=1386.9k, avg=178987.82, stdev=260357.40 00:22:42.790 clat percentiles (msec): 00:22:42.790 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 16], 00:22:42.790 | 30.00th=[ 24], 40.00th=[ 51], 50.00th=[ 81], 60.00th=[ 108], 00:22:42.790 | 70.00th=[ 146], 80.00th=[ 309], 90.00th=[ 468], 95.00th=[ 743], 00:22:42.790 | 99.00th=[ 1250], 99.50th=[ 1368], 99.90th=[ 1385], 99.95th=[ 1385], 00:22:42.790 | 99.99th=[ 1385] 00:22:42.790 bw ( KiB/s): min= 8192, max=229888, per=8.46%, avg=91799.55, stdev=71132.47, samples=20 00:22:42.790 iops : min= 32, max= 898, avg=358.55, stdev=277.91, samples=20 00:22:42.790 lat (msec) : 2=0.30%, 4=2.71%, 10=12.14%, 20=9.76%, 50=15.15% 00:22:42.790 lat (msec) : 100=17.18%, 250=17.84%, 500=15.35%, 750=4.69%, 1000=2.25% 00:22:42.790 lat (msec) : 2000=2.63% 00:22:42.790 cpu : usr=0.88%, sys=1.19%, ctx=2571, majf=0, minf=1 00:22:42.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:42.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.790 issued rwts: total=0,3649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.790 job2: (groupid=0, jobs=1): err= 0: pid=3449826: Tue Jul 23 01:02:26 2024 00:22:42.790 write: IOPS=202, BW=50.7MiB/s (53.2MB/s)(518MiB/10217msec); 0 zone resets 00:22:42.790 slat (usec): min=22, max=461469, avg=4216.97, stdev=20400.25 00:22:42.790 clat (msec): min=4, max=1427, avg=311.18, stdev=319.77 00:22:42.790 lat (msec): min=4, max=1427, avg=315.40, stdev=323.89 00:22:42.790 clat percentiles (msec): 00:22:42.790 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 56], 00:22:42.790 | 30.00th=[ 78], 40.00th=[ 144], 50.00th=[ 184], 60.00th=[ 326], 00:22:42.790 | 70.00th=[ 359], 80.00th=[ 430], 90.00th=[ 776], 95.00th=[ 1133], 00:22:42.790 | 99.00th=[ 1368], 99.50th=[ 1418], 99.90th=[ 1435], 99.95th=[ 1435], 00:22:42.790 | 99.99th=[ 1435] 00:22:42.790 bw ( KiB/s): min= 2554, max=199680, per=4.74%, avg=51404.50, stdev=49901.24, samples=20 00:22:42.790 iops : min= 9, max= 780, avg=200.75, stdev=194.98, samples=20 00:22:42.790 lat (msec) : 10=0.53%, 20=1.98%, 50=12.93%, 100=17.04%, 250=19.55% 00:22:42.790 lat (msec) : 500=31.32%, 750=6.13%, 1000=3.91%, 2000=6.61% 00:22:42.790 cpu : usr=0.65%, sys=0.71%, ctx=1234, majf=0, minf=1 00:22:42.790 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:22:42.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.790 issued 
rwts: total=0,2072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.790 job3: (groupid=0, jobs=1): err= 0: pid=3449827: Tue Jul 23 01:02:26 2024 00:22:42.790 write: IOPS=186, BW=46.6MiB/s (48.8MB/s)(476MiB/10219msec); 0 zone resets 00:22:42.790 slat (usec): min=17, max=194801, avg=3310.70, stdev=11774.75 00:22:42.790 clat (usec): min=1906, max=1280.7k, avg=339877.79, stdev=276841.45 00:22:42.790 lat (usec): min=1938, max=1280.8k, avg=343188.49, stdev=278458.86 00:22:42.790 clat percentiles (msec): 00:22:42.790 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 33], 00:22:42.790 | 30.00th=[ 222], 40.00th=[ 271], 50.00th=[ 326], 60.00th=[ 363], 00:22:42.790 | 70.00th=[ 388], 80.00th=[ 506], 90.00th=[ 735], 95.00th=[ 944], 00:22:42.790 | 99.00th=[ 1150], 99.50th=[ 1150], 99.90th=[ 1234], 99.95th=[ 1284], 00:22:42.790 | 99.99th=[ 1284] 00:22:42.790 bw ( KiB/s): min= 7680, max=89088, per=4.34%, avg=47123.45, stdev=23999.86, samples=20 00:22:42.790 iops : min= 30, max= 348, avg=184.05, stdev=93.73, samples=20 00:22:42.790 lat (msec) : 2=0.05%, 4=1.52%, 10=10.50%, 20=4.04%, 50=6.20% 00:22:42.790 lat (msec) : 100=1.79%, 250=11.55%, 500=43.96%, 750=11.34%, 1000=4.78% 00:22:42.790 lat (msec) : 2000=4.25% 00:22:42.790 cpu : usr=0.50%, sys=0.72%, ctx=1117, majf=0, minf=1 00:22:42.790 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:22:42.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.790 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.790 issued rwts: total=0,1904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.790 job4: (groupid=0, jobs=1): err= 0: pid=3449828: Tue Jul 23 01:02:26 2024 00:22:42.790 write: IOPS=253, BW=63.4MiB/s (66.5MB/s)(648MiB/10218msec); 0 zone resets 00:22:42.790 slat (usec): min=19, max=333433, avg=2822.76, stdev=14348.69 00:22:42.790 clat (usec): min=1706, max=1420.3k, avg=249217.13, stdev=289730.04 00:22:42.791 lat (usec): min=1768, max=1420.4k, avg=252039.89, stdev=292757.46 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 23], 00:22:42.791 | 30.00th=[ 32], 40.00th=[ 68], 50.00th=[ 110], 60.00th=[ 284], 00:22:42.791 | 70.00th=[ 351], 80.00th=[ 397], 90.00th=[ 609], 95.00th=[ 827], 00:22:42.791 | 99.00th=[ 1385], 99.50th=[ 1401], 99.90th=[ 1418], 99.95th=[ 1418], 00:22:42.791 | 99.99th=[ 1418] 00:22:42.791 bw ( KiB/s): min= 2048, max=198656, per=5.96%, avg=64711.80, stdev=59359.06, samples=20 00:22:42.791 iops : min= 8, max= 776, avg=252.70, stdev=231.76, samples=20 00:22:42.791 lat (msec) : 2=0.12%, 4=1.43%, 10=6.60%, 20=8.95%, 50=20.64% 00:22:42.791 lat (msec) : 100=9.18%, 250=9.41%, 500=30.67%, 750=7.21%, 1000=1.89% 00:22:42.791 lat (msec) : 2000=3.90% 00:22:42.791 cpu : usr=0.79%, sys=0.86%, ctx=1781, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,2592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 job5: (groupid=0, jobs=1): err= 0: pid=3449829: Tue Jul 23 01:02:26 2024 00:22:42.791 write: IOPS=540, BW=135MiB/s (142MB/s)(1370MiB/10135msec); 0 zone resets 
00:22:42.791 slat (usec): min=16, max=176725, avg=983.26, stdev=6185.48 00:22:42.791 clat (usec): min=1170, max=1432.0k, avg=117319.01, stdev=198890.58 00:22:42.791 lat (usec): min=1194, max=1432.0k, avg=118302.27, stdev=200252.17 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 13], 00:22:42.791 | 30.00th=[ 16], 40.00th=[ 23], 50.00th=[ 39], 60.00th=[ 82], 00:22:42.791 | 70.00th=[ 122], 80.00th=[ 142], 90.00th=[ 279], 95.00th=[ 498], 00:22:42.791 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1401], 99.95th=[ 1435], 00:22:42.791 | 99.99th=[ 1435] 00:22:42.791 bw ( KiB/s): min= 8192, max=396800, per=12.77%, avg=138660.80, stdev=106765.93, samples=20 00:22:42.791 iops : min= 32, max= 1550, avg=541.60, stdev=417.06, samples=20 00:22:42.791 lat (msec) : 2=0.26%, 4=4.45%, 10=7.21%, 20=25.49%, 50=14.12% 00:22:42.791 lat (msec) : 100=10.86%, 250=26.06%, 500=6.68%, 750=2.21%, 1000=1.15% 00:22:42.791 lat (msec) : 2000=1.51% 00:22:42.791 cpu : usr=1.40%, sys=1.96%, ctx=4347, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,5480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 job6: (groupid=0, jobs=1): err= 0: pid=3449830: Tue Jul 23 01:02:26 2024 00:22:42.791 write: IOPS=143, BW=35.8MiB/s (37.6MB/s)(366MiB/10212msec); 0 zone resets 00:22:42.791 slat (usec): min=23, max=322088, avg=6662.87, stdev=20322.24 00:22:42.791 clat (usec): min=1743, max=1327.4k, avg=439811.26, stdev=275441.09 00:22:42.791 lat (usec): min=1804, max=1327.5k, avg=446474.13, stdev=278341.73 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 6], 5.00th=[ 124], 10.00th=[ 222], 20.00th=[ 271], 00:22:42.791 | 30.00th=[ 305], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 380], 00:22:42.791 | 70.00th=[ 426], 80.00th=[ 617], 90.00th=[ 860], 95.00th=[ 1150], 00:22:42.791 | 99.00th=[ 1267], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334], 00:22:42.791 | 99.99th=[ 1334] 00:22:42.791 bw ( KiB/s): min= 8192, max=73216, per=3.30%, avg=35838.15, stdev=19414.44, samples=20 00:22:42.791 iops : min= 32, max= 286, avg=139.95, stdev=75.88, samples=20 00:22:42.791 lat (msec) : 2=0.14%, 4=0.48%, 10=2.19%, 20=0.27%, 50=1.09% 00:22:42.791 lat (msec) : 100=0.27%, 250=10.53%, 500=60.56%, 750=12.17%, 1000=4.85% 00:22:42.791 lat (msec) : 2000=7.45% 00:22:42.791 cpu : usr=0.45%, sys=0.50%, ctx=474, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,1463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 job7: (groupid=0, jobs=1): err= 0: pid=3449831: Tue Jul 23 01:02:26 2024 00:22:42.791 write: IOPS=397, BW=99.3MiB/s (104MB/s)(1007MiB/10136msec); 0 zone resets 00:22:42.791 slat (usec): min=15, max=451796, avg=2025.76, stdev=12512.21 00:22:42.791 clat (usec): min=1852, max=1380.1k, avg=158881.85, stdev=203434.43 00:22:42.791 lat (msec): min=2, max=1385, avg=160.91, stdev=205.83 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 5], 5.00th=[ 11], 
10.00th=[ 14], 20.00th=[ 67], 00:22:42.791 | 30.00th=[ 91], 40.00th=[ 102], 50.00th=[ 116], 60.00th=[ 126], 00:22:42.791 | 70.00th=[ 142], 80.00th=[ 157], 90.00th=[ 279], 95.00th=[ 542], 00:22:42.791 | 99.00th=[ 1217], 99.50th=[ 1334], 99.90th=[ 1368], 99.95th=[ 1368], 00:22:42.791 | 99.99th=[ 1385] 00:22:42.791 bw ( KiB/s): min= 4096, max=223744, per=9.35%, avg=101511.65, stdev=62979.42, samples=20 00:22:42.791 iops : min= 16, max= 874, avg=396.45, stdev=246.08, samples=20 00:22:42.791 lat (msec) : 2=0.02%, 4=0.60%, 10=4.34%, 20=8.54%, 50=4.72% 00:22:42.791 lat (msec) : 100=21.33%, 250=49.83%, 500=4.77%, 750=1.96%, 1000=2.33% 00:22:42.791 lat (msec) : 2000=1.56% 00:22:42.791 cpu : usr=0.95%, sys=1.39%, ctx=2051, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 job8: (groupid=0, jobs=1): err= 0: pid=3449832: Tue Jul 23 01:02:26 2024 00:22:42.791 write: IOPS=551, BW=138MiB/s (145MB/s)(1397MiB/10125msec); 0 zone resets 00:22:42.791 slat (usec): min=17, max=220157, avg=797.46, stdev=6017.90 00:22:42.791 clat (usec): min=1378, max=1215.7k, avg=115135.92, stdev=187383.95 00:22:42.791 lat (usec): min=1416, max=1215.8k, avg=115933.38, stdev=188296.92 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 6], 20.00th=[ 18], 00:22:42.791 | 30.00th=[ 31], 40.00th=[ 43], 50.00th=[ 45], 60.00th=[ 47], 00:22:42.791 | 70.00th=[ 73], 80.00th=[ 140], 90.00th=[ 388], 95.00th=[ 489], 00:22:42.791 | 99.00th=[ 995], 99.50th=[ 1099], 99.90th=[ 1200], 99.95th=[ 1217], 00:22:42.791 | 99.99th=[ 1217] 00:22:42.791 bw ( KiB/s): min= 7680, max=375296, per=13.03%, avg=141419.05, stdev=102133.26, samples=20 00:22:42.791 iops : min= 30, max= 1466, avg=552.35, stdev=399.00, samples=20 00:22:42.791 lat (msec) : 2=0.68%, 4=7.82%, 10=5.14%, 20=8.50%, 50=40.08% 00:22:42.791 lat (msec) : 100=12.85%, 250=10.86%, 500=9.47%, 750=2.56%, 1000=1.13% 00:22:42.791 lat (msec) : 2000=0.91% 00:22:42.791 cpu : usr=1.51%, sys=2.07%, ctx=3929, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,5587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 job9: (groupid=0, jobs=1): err= 0: pid=3449833: Tue Jul 23 01:02:26 2024 00:22:42.791 write: IOPS=442, BW=111MiB/s (116MB/s)(1128MiB/10205msec); 0 zone resets 00:22:42.791 slat (usec): min=17, max=434441, avg=1761.05, stdev=11637.88 00:22:42.791 clat (usec): min=1438, max=1327.6k, avg=142938.03, stdev=208328.12 00:22:42.791 lat (usec): min=1521, max=1327.6k, avg=144699.08, stdev=210436.49 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 34], 00:22:42.791 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 53], 60.00th=[ 58], 00:22:42.791 | 70.00th=[ 108], 80.00th=[ 197], 90.00th=[ 384], 95.00th=[ 584], 00:22:42.791 | 99.00th=[ 1083], 99.50th=[ 1234], 99.90th=[ 1334], 99.95th=[ 1334], 00:22:42.791 | 99.99th=[ 1334] 
00:22:42.791 bw ( KiB/s): min= 8192, max=330752, per=10.49%, avg=113839.90, stdev=101655.45, samples=20 00:22:42.791 iops : min= 32, max= 1292, avg=444.65, stdev=397.08, samples=20 00:22:42.791 lat (msec) : 2=0.11%, 4=0.53%, 10=3.15%, 20=6.18%, 50=32.61% 00:22:42.791 lat (msec) : 100=26.16%, 250=12.86%, 500=12.26%, 750=3.21%, 1000=1.73% 00:22:42.791 lat (msec) : 2000=1.20% 00:22:42.791 cpu : usr=1.30%, sys=1.47%, ctx=2374, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,4511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 job10: (groupid=0, jobs=1): err= 0: pid=3449834: Tue Jul 23 01:02:26 2024 00:22:42.791 write: IOPS=487, BW=122MiB/s (128MB/s)(1227MiB/10068msec); 0 zone resets 00:22:42.791 slat (usec): min=19, max=264078, avg=1345.39, stdev=5542.56 00:22:42.791 clat (usec): min=1318, max=1220.6k, avg=129883.48, stdev=163788.75 00:22:42.791 lat (usec): min=1363, max=1229.8k, avg=131228.87, stdev=164487.64 00:22:42.791 clat percentiles (msec): 00:22:42.791 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 24], 20.00th=[ 72], 00:22:42.791 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 92], 00:22:42.791 | 70.00th=[ 103], 80.00th=[ 129], 90.00th=[ 292], 95.00th=[ 443], 00:22:42.791 | 99.00th=[ 961], 99.50th=[ 1053], 99.90th=[ 1183], 99.95th=[ 1217], 00:22:42.791 | 99.99th=[ 1217] 00:22:42.791 bw ( KiB/s): min=18944, max=208896, per=11.42%, avg=124001.80, stdev=67896.26, samples=20 00:22:42.791 iops : min= 74, max= 816, avg=484.35, stdev=265.20, samples=20 00:22:42.791 lat (msec) : 2=0.16%, 4=1.14%, 10=3.75%, 20=4.32%, 50=7.09% 00:22:42.791 lat (msec) : 100=52.30%, 250=19.52%, 500=7.52%, 750=2.00%, 1000=1.37% 00:22:42.791 lat (msec) : 2000=0.84% 00:22:42.791 cpu : usr=1.40%, sys=1.61%, ctx=2340, majf=0, minf=1 00:22:42.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:42.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.791 issued rwts: total=0,4908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.791 00:22:42.791 Run status group 0 (all jobs): 00:22:42.792 WRITE: bw=1060MiB/s (1112MB/s), 35.8MiB/s-177MiB/s (37.6MB/s-186MB/s), io=10.6GiB (11.4GB), run=10068-10219msec 00:22:42.792 00:22:42.792 Disk stats (read/write): 00:22:42.792 nvme0n1: ios=46/14033, merge=0/0, ticks=1708/1207185, in_queue=1208893, util=99.94% 00:22:42.792 nvme10n1: ios=37/7273, merge=0/0, ticks=703/1244910, in_queue=1245613, util=100.00% 00:22:42.792 nvme1n1: ios=0/4113, merge=0/0, ticks=0/1236876, in_queue=1236876, util=97.68% 00:22:42.792 nvme2n1: ios=46/3774, merge=0/0, ticks=1585/1246384, in_queue=1247969, util=100.00% 00:22:42.792 nvme3n1: ios=42/5149, merge=0/0, ticks=376/1242647, in_queue=1243023, util=100.00% 00:22:42.792 nvme4n1: ios=0/10781, merge=0/0, ticks=0/1227861, in_queue=1227861, util=98.15% 00:22:42.792 nvme5n1: ios=0/2902, merge=0/0, ticks=0/1232561, in_queue=1232561, util=98.35% 00:22:42.792 nvme6n1: ios=38/7873, merge=0/0, ticks=525/1213168, in_queue=1213693, util=100.00% 00:22:42.792 nvme7n1: ios=0/10989, merge=0/0, ticks=0/1229978, in_queue=1229978, util=98.79% 
00:22:42.792 nvme8n1: ios=0/8998, merge=0/0, ticks=0/1243733, in_queue=1243733, util=98.94% 00:22:42.792 nvme9n1: ios=43/9569, merge=0/0, ticks=1744/1223322, in_queue=1225066, util=100.00% 00:22:42.792 01:02:26 -- target/multiconnection.sh@36 -- # sync 00:22:42.792 01:02:26 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:42.792 01:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.792 01:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:42.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:42.792 01:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:42.792 01:02:26 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.792 01:02:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.792 01:02:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:42.792 01:02:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.792 01:02:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:42.792 01:02:26 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.792 01:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.792 01:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.792 01:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:42.792 01:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.792 01:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.792 01:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:42.792 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:42.792 01:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:42.792 01:02:26 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.792 01:02:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.792 01:02:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:42.792 01:02:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.792 01:02:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:42.792 01:02:26 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.792 01:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:42.792 01:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.792 01:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:42.792 01:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.792 01:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.792 01:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:43.050 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:43.050 01:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:43.050 01:02:26 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.050 01:02:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.050 01:02:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:43.050 01:02:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.050 01:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:43.050 01:02:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.050 01:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:22:43.050 01:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.050 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:43.050 01:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.050 01:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.050 01:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:43.050 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:43.050 01:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:43.050 01:02:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.050 01:02:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.050 01:02:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:43.050 01:02:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.050 01:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:43.050 01:02:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.051 01:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:43.051 01:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.051 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:43.051 01:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.051 01:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.051 01:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:43.617 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:43.617 01:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:43.617 01:02:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.617 01:02:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.617 01:02:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:43.617 01:02:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.617 01:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:43.617 01:02:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.617 01:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:43.617 01:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.617 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:43.617 01:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.617 01:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.617 01:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:43.617 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:43.617 01:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:43.617 01:02:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.617 01:02:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.617 01:02:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:43.617 01:02:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.617 01:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:43.617 01:02:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.617 01:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:43.617 01:02:27 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:22:43.617 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:43.617 01:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.617 01:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.617 01:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:43.876 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:43.876 01:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:43.876 01:02:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.876 01:02:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.876 01:02:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:43.876 01:02:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.876 01:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:43.876 01:02:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.876 01:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:43.876 01:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.876 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:43.876 01:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.876 01:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.876 01:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:43.876 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:43.876 01:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:43.876 01:02:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.876 01:02:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.876 01:02:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:43.876 01:02:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.876 01:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:43.876 01:02:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.876 01:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:43.876 01:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.876 01:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:43.876 01:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.876 01:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.876 01:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:44.135 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:44.135 01:02:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:44.135 01:02:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:44.135 01:02:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:44.135 01:02:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:44.135 01:02:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:44.135 01:02:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:44.135 01:02:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:44.135 01:02:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:44.135 01:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.135 01:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:44.135 
01:02:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.135 01:02:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.135 01:02:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:44.135 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:44.135 01:02:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:44.135 01:02:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:44.135 01:02:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:44.135 01:02:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:44.135 01:02:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:44.135 01:02:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:44.135 01:02:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:44.135 01:02:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:44.135 01:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.135 01:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:44.394 01:02:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.394 01:02:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.394 01:02:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:44.394 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:44.394 01:02:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:44.394 01:02:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:44.394 01:02:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:44.394 01:02:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:44.394 01:02:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:44.394 01:02:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:44.394 01:02:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:44.394 01:02:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:44.394 01:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.394 01:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:44.394 01:02:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.394 01:02:28 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:44.394 01:02:28 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:44.394 01:02:28 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:44.394 01:02:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:44.394 01:02:28 -- nvmf/common.sh@116 -- # sync 00:22:44.394 01:02:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:44.394 01:02:28 -- nvmf/common.sh@119 -- # set +e 00:22:44.394 01:02:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:44.394 01:02:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:44.394 rmmod nvme_tcp 00:22:44.394 rmmod nvme_fabrics 00:22:44.394 rmmod nvme_keyring 00:22:44.394 01:02:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:44.394 01:02:28 -- nvmf/common.sh@123 -- # set -e 00:22:44.394 01:02:28 -- nvmf/common.sh@124 -- # return 0 00:22:44.394 01:02:28 -- nvmf/common.sh@477 -- # '[' -n 3444229 ']' 00:22:44.394 01:02:28 -- nvmf/common.sh@478 -- # killprocess 3444229 00:22:44.394 01:02:28 -- common/autotest_common.sh@926 -- # '[' -z 3444229 ']' 00:22:44.394 01:02:28 -- 
common/autotest_common.sh@930 -- # kill -0 3444229 00:22:44.394 01:02:28 -- common/autotest_common.sh@931 -- # uname 00:22:44.394 01:02:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.394 01:02:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3444229 00:22:44.394 01:02:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:44.394 01:02:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:44.394 01:02:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3444229' 00:22:44.394 killing process with pid 3444229 00:22:44.394 01:02:28 -- common/autotest_common.sh@945 -- # kill 3444229 00:22:44.394 01:02:28 -- common/autotest_common.sh@950 -- # wait 3444229 00:22:44.960 01:02:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:44.960 01:02:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:44.960 01:02:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:44.960 01:02:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.960 01:02:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:44.960 01:02:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.960 01:02:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.960 01:02:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.862 01:02:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:46.862 00:22:46.862 real 1m1.145s 00:22:46.862 user 3m17.740s 00:22:46.862 sys 0m24.443s 00:22:46.862 01:02:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.862 01:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:46.862 ************************************ 00:22:46.862 END TEST nvmf_multiconnection 00:22:46.862 ************************************ 00:22:47.120 01:02:31 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:47.120 01:02:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:47.120 01:02:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.121 01:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 ************************************ 00:22:47.121 START TEST nvmf_initiator_timeout 00:22:47.121 ************************************ 00:22:47.121 01:02:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:47.121 * Looking for test storage... 
00:22:47.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.121 01:02:31 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.121 01:02:31 -- nvmf/common.sh@7 -- # uname -s 00:22:47.121 01:02:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.121 01:02:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.121 01:02:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.121 01:02:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.121 01:02:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.121 01:02:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.121 01:02:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.121 01:02:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.121 01:02:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.121 01:02:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.121 01:02:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.121 01:02:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.121 01:02:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.121 01:02:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.121 01:02:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.121 01:02:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.121 01:02:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.121 01:02:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.121 01:02:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.121 01:02:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.121 01:02:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.121 01:02:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.121 01:02:31 -- paths/export.sh@5 -- # export PATH 00:22:47.121 01:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.121 01:02:31 -- nvmf/common.sh@46 -- # : 0 00:22:47.121 01:02:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.121 01:02:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.121 01:02:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.121 01:02:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.121 01:02:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.121 01:02:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.121 01:02:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.121 01:02:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.121 01:02:31 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.121 01:02:31 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.121 01:02:31 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:47.121 01:02:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.121 01:02:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.121 01:02:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.121 01:02:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.121 01:02:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.121 01:02:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.121 01:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.121 01:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.121 01:02:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:47.121 01:02:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:47.121 01:02:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:47.121 01:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:49.026 01:02:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:49.026 01:02:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:49.026 01:02:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:49.026 01:02:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:49.026 01:02:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:49.026 01:02:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:49.026 01:02:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:49.026 01:02:33 -- nvmf/common.sh@294 -- # net_devs=() 00:22:49.026 01:02:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:49.027 
01:02:33 -- nvmf/common.sh@295 -- # e810=() 00:22:49.027 01:02:33 -- nvmf/common.sh@295 -- # local -ga e810 00:22:49.027 01:02:33 -- nvmf/common.sh@296 -- # x722=() 00:22:49.027 01:02:33 -- nvmf/common.sh@296 -- # local -ga x722 00:22:49.027 01:02:33 -- nvmf/common.sh@297 -- # mlx=() 00:22:49.027 01:02:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:49.027 01:02:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.027 01:02:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:49.027 01:02:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:49.027 01:02:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:49.027 01:02:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.027 01:02:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.027 01:02:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.027 01:02:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:49.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:49.027 01:02:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:49.027 01:02:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.027 01:02:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.027 01:02:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.027 01:02:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.027 01:02:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:22:49.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:49.027 01:02:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.027 01:02:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.027 01:02:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.027 01:02:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.027 01:02:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.027 01:02:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:49.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:49.027 01:02:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.027 01:02:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:49.027 01:02:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:49.027 01:02:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:49.027 01:02:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:49.027 01:02:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.027 01:02:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.027 01:02:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.027 01:02:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:49.027 01:02:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.027 01:02:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.027 01:02:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:49.027 01:02:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.027 01:02:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.027 01:02:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:49.027 01:02:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:49.027 01:02:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.027 01:02:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.287 01:02:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.287 01:02:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.287 01:02:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:49.287 01:02:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.287 01:02:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.287 01:02:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.287 01:02:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:49.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:49.287 00:22:49.287 --- 10.0.0.2 ping statistics --- 00:22:49.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.287 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:49.287 01:02:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:22:49.287 00:22:49.287 --- 10.0.0.1 ping statistics --- 00:22:49.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.287 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:49.287 01:02:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.287 01:02:33 -- nvmf/common.sh@410 -- # return 0 00:22:49.287 01:02:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:49.287 01:02:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.287 01:02:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:49.287 01:02:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:49.287 01:02:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.287 01:02:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:49.287 01:02:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:49.287 01:02:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:49.287 01:02:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:49.287 01:02:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:49.287 01:02:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.287 01:02:33 -- nvmf/common.sh@469 -- # nvmfpid=3453064 00:22:49.287 01:02:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.287 01:02:33 -- nvmf/common.sh@470 -- # waitforlisten 3453064 00:22:49.287 01:02:33 -- common/autotest_common.sh@819 -- # '[' -z 3453064 ']' 00:22:49.287 01:02:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.287 01:02:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.287 01:02:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.287 01:02:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.287 01:02:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.287 [2024-07-23 01:02:33.389675] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:22:49.287 [2024-07-23 01:02:33.389758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.287 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.287 [2024-07-23 01:02:33.456408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.545 [2024-07-23 01:02:33.545177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:49.545 [2024-07-23 01:02:33.545335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.545 [2024-07-23 01:02:33.545351] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.545 [2024-07-23 01:02:33.545364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.545 [2024-07-23 01:02:33.545496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.545 [2024-07-23 01:02:33.545526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.545 [2024-07-23 01:02:33.545582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.545 [2024-07-23 01:02:33.545584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.480 01:02:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.480 01:02:34 -- common/autotest_common.sh@852 -- # return 0 00:22:50.480 01:02:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:50.480 01:02:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 01:02:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:50.480 01:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 Malloc0 00:22:50.480 01:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:50.480 01:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 Delay0 00:22:50.480 01:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.480 01:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 [2024-07-23 01:02:34.384892] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.480 01:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:50.480 01:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 01:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:50.480 01:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 01:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.480 01:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.480 01:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.480 [2024-07-23 01:02:34.413196] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.480 01:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.480 01:02:34 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:51.048 01:02:35 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:51.048 01:02:35 -- common/autotest_common.sh@1177 -- # local i=0 00:22:51.048 01:02:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:51.048 01:02:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:51.048 01:02:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:52.985 01:02:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:52.985 01:02:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:52.985 01:02:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:52.985 01:02:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:52.985 01:02:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.985 01:02:37 -- common/autotest_common.sh@1187 -- # return 0 00:22:52.985 01:02:37 -- target/initiator_timeout.sh@35 -- # fio_pid=3453558 00:22:52.985 01:02:37 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:52.985 01:02:37 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:52.985 [global] 00:22:52.985 thread=1 00:22:52.985 invalidate=1 00:22:52.985 rw=write 00:22:52.985 time_based=1 00:22:52.985 runtime=60 00:22:52.985 ioengine=libaio 00:22:52.985 direct=1 00:22:52.985 bs=4096 00:22:52.985 iodepth=1 00:22:52.985 norandommap=0 00:22:52.985 numjobs=1 00:22:52.985 00:22:52.985 verify_dump=1 00:22:52.985 verify_backlog=512 00:22:52.985 verify_state_save=0 00:22:52.985 do_verify=1 00:22:52.985 verify=crc32c-intel 00:22:52.985 [job0] 00:22:52.985 filename=/dev/nvme0n1 00:22:52.985 Could not set queue depth (nvme0n1) 00:22:53.243 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:53.243 fio-3.35 00:22:53.243 Starting 1 thread 00:22:56.532 01:02:40 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:56.532 01:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.532 01:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.532 true 00:22:56.532 01:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.532 01:02:40 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:56.532 01:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.532 01:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.532 true 00:22:56.532 01:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.532 01:02:40 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:56.532 01:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.532 01:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.532 true 00:22:56.532 01:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.532 01:02:40 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:56.532 01:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.532 01:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.532 true 00:22:56.532 01:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
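At this point in the trace the initiator_timeout fixture is fully assembled: a 64 MiB Malloc0 bdev wrapped in a Delay0 delay bdev (30 us per-I/O latencies), exported over NVMe/TCP as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, connected from the initiator, and handed to a 60-second fio write job against /dev/nvme0n1. The bdev_delay_update_latency calls just above then raise every latency class to roughly 31 seconds (the values are microseconds), longer than the Linux NVMe host's default I/O timeout (nominally 30 s), which is what actually exercises the timeout path while fio keeps running; the latencies are dropped back to 30 us a few steps later so the outstanding I/O can drain. A minimal standalone sketch of the target-side sequence, assuming an nvmf_tgt is already running and reachable through scripts/rpc.py on the default /var/tmp/spdk.sock (the rpc path and the 10.0.0.2 listener address are taken from this run and may differ elsewhere):

rpc=./scripts/rpc.py    # assumed path to the SPDK RPC client

# Fixture: malloc bdev behind a delay bdev, exported over NVMe/TCP (as in the trace above)
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While the initiator runs fio, push the delay bdev latencies (in microseconds)
# past the initiator's I/O timeout to force the timeout/reset path:
$rpc bdev_delay_update_latency Delay0 avg_read 31000000
$rpc bdev_delay_update_latency Delay0 avg_write 31000000
$rpc bdev_delay_update_latency Delay0 p99_read 31000000
$rpc bdev_delay_update_latency Delay0 p99_write 310000000

# ...and later restore them so remaining I/O completes and fio can exit cleanly:
$rpc bdev_delay_update_latency Delay0 avg_read 30
$rpc bdev_delay_update_latency Delay0 avg_write 30
$rpc bdev_delay_update_latency Delay0 p99_read 30
$rpc bdev_delay_update_latency Delay0 p99_write 30

On the initiator side the only moving parts are the nvme connect call shown above and the fio wrapper; the pass/fail signal is simply fio's exit status after the latencies are restored, which is what the "nvmf hotplug test: fio successful as expected" check reports further down.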
00:22:56.532 01:02:40 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:59.063 01:02:43 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:59.063 01:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.063 01:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 true 00:22:59.063 01:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.063 01:02:43 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:59.063 01:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.063 01:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 true 00:22:59.063 01:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.063 01:02:43 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:59.063 01:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.063 01:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 true 00:22:59.063 01:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.063 01:02:43 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:59.063 01:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.063 01:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 true 00:22:59.063 01:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.063 01:02:43 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:59.063 01:02:43 -- target/initiator_timeout.sh@54 -- # wait 3453558 00:23:55.293 00:23:55.293 job0: (groupid=0, jobs=1): err= 0: pid=3453707: Tue Jul 23 01:03:37 2024 00:23:55.293 read: IOPS=11, BW=45.5KiB/s (46.6kB/s)(2732KiB/60039msec) 00:23:55.293 slat (nsec): min=6497, max=68621, avg=21362.85, stdev=9462.83 00:23:55.293 clat (usec): min=326, max=41140k, avg=87393.74, stdev=1573252.76 00:23:55.293 lat (usec): min=334, max=41140k, avg=87415.11, stdev=1573252.61 00:23:55.293 clat percentiles (usec): 00:23:55.293 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 371], 00:23:55.293 | 20.00th=[ 392], 30.00th=[ 506], 40.00th=[ 41157], 00:23:55.293 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:23:55.293 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:23:55.293 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:23:55.293 | 99.95th=[17112761], 99.99th=[17112761] 00:23:55.293 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60039msec); 0 zone resets 00:23:55.293 slat (usec): min=5, max=32180, avg=50.19, stdev=1005.14 00:23:55.293 clat (usec): min=207, max=390, avg=267.43, stdev=45.66 00:23:55.293 lat (usec): min=215, max=32558, avg=317.62, stdev=1010.15 00:23:55.293 clat percentiles (usec): 00:23:55.293 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:23:55.293 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 277], 00:23:55.293 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 351], 00:23:55.293 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 388], 99.95th=[ 392], 00:23:55.293 | 99.99th=[ 392] 00:23:55.293 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:23:55.293 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:23:55.293 lat (usec) : 250=31.40%, 500=40.54%, 750=1.87% 00:23:55.293 lat (msec) : 50=26.13%, >=2000=0.06% 00:23:55.293 cpu : usr=0.04%, sys=0.04%, ctx=1710, majf=0, minf=2 00:23:55.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:23:55.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.293 issued rwts: total=683,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:55.293 00:23:55.293 Run status group 0 (all jobs): 00:23:55.293 READ: bw=45.5KiB/s (46.6kB/s), 45.5KiB/s-45.5KiB/s (46.6kB/s-46.6kB/s), io=2732KiB (2798kB), run=60039-60039msec 00:23:55.293 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60039-60039msec 00:23:55.293 00:23:55.293 Disk stats (read/write): 00:23:55.293 nvme0n1: ios=731/1024, merge=0/0, ticks=19328/259, in_queue=19587, util=99.72% 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:55.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:55.293 01:03:37 -- common/autotest_common.sh@1198 -- # local i=0 00:23:55.293 01:03:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:55.293 01:03:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:55.293 01:03:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:55.293 01:03:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:55.293 01:03:37 -- common/autotest_common.sh@1210 -- # return 0 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:55.293 nvmf hotplug test: fio successful as expected 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.293 01:03:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.293 01:03:37 -- common/autotest_common.sh@10 -- # set +x 00:23:55.293 01:03:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:55.293 01:03:37 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:55.293 01:03:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:55.293 01:03:37 -- nvmf/common.sh@116 -- # sync 00:23:55.293 01:03:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:55.293 01:03:37 -- nvmf/common.sh@119 -- # set +e 00:23:55.293 01:03:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:55.293 01:03:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:55.293 rmmod nvme_tcp 00:23:55.293 rmmod nvme_fabrics 00:23:55.293 rmmod nvme_keyring 00:23:55.293 01:03:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:55.293 01:03:37 -- nvmf/common.sh@123 -- # set -e 00:23:55.293 01:03:37 -- nvmf/common.sh@124 -- # return 0 00:23:55.293 01:03:37 -- nvmf/common.sh@477 -- # '[' -n 3453064 ']' 00:23:55.293 01:03:37 -- nvmf/common.sh@478 -- # killprocess 3453064 00:23:55.293 01:03:37 -- common/autotest_common.sh@926 -- # '[' -z 3453064 ']' 00:23:55.293 01:03:37 -- common/autotest_common.sh@930 -- # kill -0 3453064 00:23:55.293 01:03:37 -- common/autotest_common.sh@931 -- # uname 00:23:55.293 01:03:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:55.293 01:03:37 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 3453064 00:23:55.293 01:03:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:55.293 01:03:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:55.293 01:03:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3453064' 00:23:55.293 killing process with pid 3453064 00:23:55.293 01:03:37 -- common/autotest_common.sh@945 -- # kill 3453064 00:23:55.293 01:03:37 -- common/autotest_common.sh@950 -- # wait 3453064 00:23:55.293 01:03:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:55.293 01:03:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:55.293 01:03:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:55.293 01:03:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.293 01:03:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:55.293 01:03:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.293 01:03:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.293 01:03:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.861 01:03:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:55.861 00:23:55.861 real 1m8.967s 00:23:55.861 user 4m13.904s 00:23:55.861 sys 0m6.609s 00:23:55.861 01:03:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.861 01:03:40 -- common/autotest_common.sh@10 -- # set +x 00:23:55.861 ************************************ 00:23:55.861 END TEST nvmf_initiator_timeout 00:23:55.861 ************************************ 00:23:56.120 01:03:40 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:56.120 01:03:40 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:56.120 01:03:40 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:56.120 01:03:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:56.120 01:03:40 -- common/autotest_common.sh@10 -- # set +x 00:23:58.020 01:03:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:58.020 01:03:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:58.020 01:03:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:58.020 01:03:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:58.020 01:03:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:58.020 01:03:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:58.020 01:03:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:58.020 01:03:41 -- nvmf/common.sh@294 -- # net_devs=() 00:23:58.020 01:03:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:58.020 01:03:41 -- nvmf/common.sh@295 -- # e810=() 00:23:58.020 01:03:41 -- nvmf/common.sh@295 -- # local -ga e810 00:23:58.020 01:03:41 -- nvmf/common.sh@296 -- # x722=() 00:23:58.020 01:03:41 -- nvmf/common.sh@296 -- # local -ga x722 00:23:58.020 01:03:41 -- nvmf/common.sh@297 -- # mlx=() 00:23:58.020 01:03:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:58.020 01:03:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.020 01:03:41 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.020 01:03:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:58.020 01:03:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:58.020 01:03:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:58.020 01:03:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:58.020 01:03:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:58.020 01:03:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:58.020 01:03:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.020 01:03:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:58.020 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:58.020 01:03:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:58.020 01:03:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.021 01:03:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:58.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:58.021 01:03:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:58.021 01:03:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:58.021 01:03:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.021 01:03:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.021 01:03:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.021 01:03:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.021 01:03:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:58.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:58.021 01:03:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.021 01:03:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.021 01:03:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.021 01:03:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.021 01:03:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.021 01:03:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:58.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:58.021 01:03:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.021 01:03:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:58.021 01:03:41 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.021 01:03:41 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:58.021 01:03:41 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:58.021 01:03:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:58.021 01:03:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:58.021 01:03:41 -- common/autotest_common.sh@10 -- # set +x 00:23:58.021 ************************************ 00:23:58.021 START TEST nvmf_perf_adq 00:23:58.021 ************************************ 00:23:58.021 01:03:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:58.021 * Looking for test storage... 00:23:58.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:58.021 01:03:42 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.021 01:03:42 -- nvmf/common.sh@7 -- # uname -s 00:23:58.021 01:03:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.021 01:03:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.021 01:03:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.021 01:03:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.021 01:03:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.021 01:03:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.021 01:03:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.021 01:03:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.021 01:03:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.021 01:03:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.021 01:03:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.021 01:03:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.021 01:03:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.021 01:03:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.021 01:03:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.021 01:03:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.021 01:03:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.021 01:03:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.021 01:03:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.021 01:03:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.021 01:03:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.021 01:03:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.021 01:03:42 -- paths/export.sh@5 -- # export PATH 00:23:58.021 01:03:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.021 01:03:42 -- nvmf/common.sh@46 -- # : 0 00:23:58.021 01:03:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:58.021 01:03:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:58.021 01:03:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:58.021 01:03:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.021 01:03:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.021 01:03:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:58.021 01:03:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:58.021 01:03:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:58.021 01:03:42 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:58.021 01:03:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:58.021 01:03:42 -- common/autotest_common.sh@10 -- # set +x 00:23:59.921 01:03:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.921 01:03:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:59.921 01:03:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:59.921 01:03:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:59.921 01:03:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:59.921 01:03:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:59.921 01:03:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:59.921 01:03:44 -- nvmf/common.sh@294 -- # net_devs=() 00:23:59.921 01:03:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:59.921 01:03:44 -- nvmf/common.sh@295 -- # e810=() 00:23:59.921 01:03:44 -- nvmf/common.sh@295 -- # local -ga e810 00:23:59.921 01:03:44 -- nvmf/common.sh@296 -- # x722=() 00:23:59.922 01:03:44 -- nvmf/common.sh@296 -- # local -ga x722 00:23:59.922 01:03:44 -- nvmf/common.sh@297 -- # mlx=() 00:23:59.922 01:03:44 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:23:59.922 01:03:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.922 01:03:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:59.922 01:03:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:59.922 01:03:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:59.922 01:03:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:59.922 01:03:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:59.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:59.922 01:03:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:59.922 01:03:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:59.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:59.922 01:03:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:59.922 01:03:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:59.922 01:03:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:59.922 01:03:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.922 01:03:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:59.922 01:03:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.922 01:03:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:59.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:59.922 01:03:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.922 01:03:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:59.922 01:03:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:59.922 01:03:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:59.922 01:03:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.922 01:03:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:59.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:59.922 01:03:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.922 01:03:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:59.922 01:03:44 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.922 01:03:44 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:59.922 01:03:44 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:59.922 01:03:44 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:59.922 01:03:44 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:00.859 01:03:44 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:02.760 01:03:46 -- target/perf_adq.sh@54 -- # sleep 5 00:24:08.069 01:03:51 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:08.069 01:03:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:08.069 01:03:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.069 01:03:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.069 01:03:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.069 01:03:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.069 01:03:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.069 01:03:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.069 01:03:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.069 01:03:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:08.069 01:03:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:08.069 01:03:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:08.069 01:03:51 -- common/autotest_common.sh@10 -- # set +x 00:24:08.069 01:03:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:08.069 01:03:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:08.069 01:03:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:08.069 01:03:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:08.069 01:03:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:08.069 01:03:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:08.069 01:03:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:08.069 01:03:51 -- nvmf/common.sh@294 -- # net_devs=() 00:24:08.069 01:03:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:08.069 01:03:51 -- nvmf/common.sh@295 -- # e810=() 00:24:08.069 01:03:51 -- nvmf/common.sh@295 -- # local -ga e810 00:24:08.069 01:03:51 -- nvmf/common.sh@296 -- # x722=() 00:24:08.069 01:03:51 -- nvmf/common.sh@296 -- # local -ga x722 00:24:08.069 01:03:51 -- nvmf/common.sh@297 -- # mlx=() 00:24:08.069 01:03:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:08.069 01:03:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.069 01:03:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:08.069 01:03:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:08.069 01:03:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:08.069 01:03:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:08.069 01:03:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:08.069 01:03:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:08.069 01:03:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.070 01:03:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:08.070 01:03:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.070 01:03:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.070 01:03:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:08.070 01:03:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:08.070 01:03:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.070 01:03:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:08.070 01:03:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.070 01:03:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.070 01:03:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.070 01:03:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:08.070 01:03:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.070 01:03:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:08.070 01:03:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.070 01:03:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.070 01:03:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.070 01:03:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:08.070 01:03:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:08.070 01:03:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:08.070 01:03:51 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:08.070 01:03:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.070 01:03:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.070 01:03:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.070 01:03:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:08.070 01:03:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.070 01:03:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.070 01:03:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:08.070 01:03:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.070 01:03:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.070 01:03:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:08.070 01:03:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:08.070 01:03:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.070 01:03:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.070 01:03:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.070 01:03:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.070 01:03:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:08.070 01:03:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.070 01:03:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.070 01:03:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.070 01:03:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:08.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:24:08.070 00:24:08.070 --- 10.0.0.2 ping statistics --- 00:24:08.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.070 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:08.070 01:03:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:08.070 00:24:08.070 --- 10.0.0.1 ping statistics --- 00:24:08.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.070 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:08.070 01:03:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.070 01:03:51 -- nvmf/common.sh@410 -- # return 0 00:24:08.070 01:03:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:08.070 01:03:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.070 01:03:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:08.070 01:03:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.070 01:03:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:08.070 01:03:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:08.070 01:03:51 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:08.070 01:03:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:08.070 01:03:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:08.070 01:03:51 -- common/autotest_common.sh@10 -- # set +x 00:24:08.070 01:03:51 -- nvmf/common.sh@469 -- # nvmfpid=3466117 00:24:08.070 01:03:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:08.070 01:03:51 -- nvmf/common.sh@470 -- # waitforlisten 3466117 00:24:08.070 01:03:51 -- common/autotest_common.sh@819 -- # '[' -z 3466117 ']' 00:24:08.070 01:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.070 01:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:08.070 01:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.070 01:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:08.070 01:03:51 -- common/autotest_common.sh@10 -- # set +x 00:24:08.070 [2024-07-23 01:03:51.927633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:08.070 [2024-07-23 01:03:51.927717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.070 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.070 [2024-07-23 01:03:51.992347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.070 [2024-07-23 01:03:52.076317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:08.070 [2024-07-23 01:03:52.076481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.070 [2024-07-23 01:03:52.076499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.070 [2024-07-23 01:03:52.076511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
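Note for readers reconstructing the setup from the trace: the nvmf_tcp_init steps above amount to a back-to-back topology built from one dual-port NIC. Port cvl_0_0 is moved into a private network namespace to play the target, while its sibling cvl_0_1 stays in the default namespace as the initiator. A minimal consolidation of the commands this run actually issued (interface names and the 10.0.0.0/24 addresses are specific to this job) looks like:

    # target-side port lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator side in the default namespace, target side inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring the links up and open the NVMe/TCP port toward the initiator
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # one ping in each direction confirms the loop before the target starts
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched under ip netns exec so that it binds 10.0.0.2 inside the namespace, which is why every NVMF_APP invocation in this log carries the cvl_0_0_ns_spdk prefix.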
00:24:08.070 [2024-07-23 01:03:52.076562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.070 [2024-07-23 01:03:52.076600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.070 [2024-07-23 01:03:52.076683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.070 [2024-07-23 01:03:52.076687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.070 01:03:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:08.070 01:03:52 -- common/autotest_common.sh@852 -- # return 0 00:24:08.070 01:03:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:08.070 01:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:08.070 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.070 01:03:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.070 01:03:52 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:08.070 01:03:52 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:08.070 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.070 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.070 01:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.070 01:03:52 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:08.070 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.070 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.070 01:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.070 01:03:52 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:08.070 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.070 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.070 [2024-07-23 01:03:52.255092] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.070 01:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.070 01:03:52 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:08.070 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.070 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.328 Malloc1 00:24:08.328 01:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.328 01:03:52 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.328 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.328 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.328 01:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.328 01:03:52 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:08.328 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.328 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.328 01:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.328 01:03:52 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.328 01:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:08.328 01:03:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.328 [2024-07-23 01:03:52.306065] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.328 01:03:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.328 01:03:52 -- target/perf_adq.sh@73 -- # perfpid=3466146 00:24:08.328 01:03:52 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:08.328 01:03:52 -- target/perf_adq.sh@74 -- # sleep 2 00:24:08.329 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.225 01:03:54 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:10.225 01:03:54 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:10.225 01:03:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.225 01:03:54 -- target/perf_adq.sh@76 -- # wc -l 00:24:10.225 01:03:54 -- common/autotest_common.sh@10 -- # set +x 00:24:10.225 01:03:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.225 01:03:54 -- target/perf_adq.sh@76 -- # count=4 00:24:10.225 01:03:54 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:10.225 01:03:54 -- target/perf_adq.sh@81 -- # wait 3466146 00:24:18.332 Initializing NVMe Controllers 00:24:18.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:18.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:18.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:18.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:18.332 Initialization complete. Launching workers. 00:24:18.332 ======================================================== 00:24:18.332 Latency(us) 00:24:18.332 Device Information : IOPS MiB/s Average min max 00:24:18.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10813.90 42.24 5918.66 1297.34 10217.08 00:24:18.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10992.30 42.94 5822.76 872.65 9880.60 00:24:18.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10967.40 42.84 5836.66 1125.66 8973.28 00:24:18.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11601.60 45.32 5516.54 978.92 9146.27 00:24:18.332 ======================================================== 00:24:18.332 Total : 44375.20 173.34 5769.51 872.65 10217.08 00:24:18.332 00:24:18.332 01:04:02 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:18.332 01:04:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:18.332 01:04:02 -- nvmf/common.sh@116 -- # sync 00:24:18.332 01:04:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:18.332 01:04:02 -- nvmf/common.sh@119 -- # set +e 00:24:18.333 01:04:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:18.333 01:04:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:18.333 rmmod nvme_tcp 00:24:18.333 rmmod nvme_fabrics 00:24:18.333 rmmod nvme_keyring 00:24:18.333 01:04:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:18.333 01:04:02 -- nvmf/common.sh@123 -- # set -e 00:24:18.333 01:04:02 -- nvmf/common.sh@124 -- # return 0 00:24:18.333 01:04:02 -- nvmf/common.sh@477 -- # '[' -n 3466117 ']' 00:24:18.333 01:04:02 -- nvmf/common.sh@478 -- # killprocess 3466117 00:24:18.333 01:04:02 -- common/autotest_common.sh@926 -- # '[' -z 3466117 ']' 00:24:18.333 01:04:02 -- common/autotest_common.sh@930 -- # 
kill -0 3466117 00:24:18.333 01:04:02 -- common/autotest_common.sh@931 -- # uname 00:24:18.333 01:04:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:18.333 01:04:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3466117 00:24:18.333 01:04:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:18.333 01:04:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:18.333 01:04:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3466117' 00:24:18.333 killing process with pid 3466117 00:24:18.333 01:04:02 -- common/autotest_common.sh@945 -- # kill 3466117 00:24:18.333 01:04:02 -- common/autotest_common.sh@950 -- # wait 3466117 00:24:18.591 01:04:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:18.591 01:04:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:18.591 01:04:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:18.591 01:04:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.591 01:04:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:18.591 01:04:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.591 01:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.591 01:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.123 01:04:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:21.123 01:04:04 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:21.123 01:04:04 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:21.381 01:04:05 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:23.280 01:04:07 -- target/perf_adq.sh@54 -- # sleep 5 00:24:28.548 01:04:12 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:28.548 01:04:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:28.548 01:04:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.548 01:04:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.548 01:04:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.548 01:04:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.548 01:04:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.549 01:04:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.549 01:04:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.549 01:04:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:28.549 01:04:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:28.549 01:04:12 -- common/autotest_common.sh@10 -- # set +x 00:24:28.549 01:04:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:28.549 01:04:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:28.549 01:04:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:28.549 01:04:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:28.549 01:04:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:28.549 01:04:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:28.549 01:04:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:28.549 01:04:12 -- nvmf/common.sh@294 -- # net_devs=() 00:24:28.549 01:04:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:28.549 01:04:12 -- nvmf/common.sh@295 -- # e810=() 00:24:28.549 01:04:12 -- nvmf/common.sh@295 -- # local -ga e810 00:24:28.549 01:04:12 -- nvmf/common.sh@296 -- # x722=() 00:24:28.549 01:04:12 -- nvmf/common.sh@296 -- # local -ga x722 00:24:28.549 01:04:12 -- nvmf/common.sh@297 -- # mlx=() 00:24:28.549 01:04:12 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:24:28.549 01:04:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.549 01:04:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:28.549 01:04:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:28.549 01:04:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:28.549 01:04:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.549 01:04:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.549 01:04:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.549 01:04:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.549 01:04:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:28.549 01:04:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.549 01:04:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.549 01:04:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.549 01:04:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.549 01:04:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.549 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.549 01:04:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.549 01:04:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.549 01:04:12 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.549 01:04:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.549 01:04:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.549 01:04:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.549 01:04:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.549 01:04:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:28.549 01:04:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:28.549 01:04:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:28.549 01:04:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.549 01:04:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.549 01:04:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.549 01:04:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:28.549 01:04:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.549 01:04:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.549 01:04:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:28.549 01:04:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.549 01:04:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.549 01:04:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:28.549 01:04:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:28.549 01:04:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.549 01:04:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.549 01:04:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.549 01:04:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.549 01:04:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:28.549 01:04:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.549 01:04:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.549 01:04:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.549 01:04:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:28.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:24:28.549 00:24:28.549 --- 10.0.0.2 ping statistics --- 00:24:28.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.549 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:24:28.549 01:04:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:24:28.549 00:24:28.549 --- 10.0.0.1 ping statistics --- 00:24:28.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.549 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:28.549 01:04:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.549 01:04:12 -- nvmf/common.sh@410 -- # return 0 00:24:28.549 01:04:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.549 01:04:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.549 01:04:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:28.549 01:04:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.549 01:04:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:28.549 01:04:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:28.549 01:04:12 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:28.549 01:04:12 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:28.549 01:04:12 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:28.549 01:04:12 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:28.549 net.core.busy_poll = 1 00:24:28.549 01:04:12 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:28.549 net.core.busy_read = 1 00:24:28.549 01:04:12 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:28.549 01:04:12 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:28.549 01:04:12 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:28.550 01:04:12 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:28.550 01:04:12 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:28.550 01:04:12 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:28.550 01:04:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:28.550 01:04:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:28.550 01:04:12 -- common/autotest_common.sh@10 -- # set +x 00:24:28.550 01:04:12 -- nvmf/common.sh@469 -- # nvmfpid=3468832 00:24:28.550 01:04:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:28.550 01:04:12 -- nvmf/common.sh@470 -- # waitforlisten 3468832 00:24:28.550 01:04:12 -- common/autotest_common.sh@819 -- # '[' -z 3468832 ']' 00:24:28.550 01:04:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.550 01:04:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:28.550 01:04:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
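The adq_configure_driver block just traced is the NIC-side half of ADQ: hardware TC offload and socket busy polling are switched on, the target port is split into two traffic classes, and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class. Pulled together, the commands from this run are (device-level commands run inside the target namespace; queue layout and addresses are the ones this job used):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

    # enable socket busy polling for the posix transport
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # two traffic classes: queues 0-1 stay default, queues 2-3 become the ADQ set
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress

    # steer NVMe/TCP (dst 10.0.0.2:4420) into traffic class 1, hardware only
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

After this the harness runs its set_xps_rxqs helper on cvl_0_0 to line XPS up with the receive queues, and only then starts nvmf_tgt.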
00:24:28.550 01:04:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:28.550 01:04:12 -- common/autotest_common.sh@10 -- # set +x 00:24:28.807 [2024-07-23 01:04:12.786494] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:28.807 [2024-07-23 01:04:12.786569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.807 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.807 [2024-07-23 01:04:12.850120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.807 [2024-07-23 01:04:12.937630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.807 [2024-07-23 01:04:12.937792] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.807 [2024-07-23 01:04:12.937810] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.807 [2024-07-23 01:04:12.937822] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.807 [2024-07-23 01:04:12.937888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.807 [2024-07-23 01:04:12.937943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.807 [2024-07-23 01:04:12.937984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.808 [2024-07-23 01:04:12.937986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.065 01:04:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:29.065 01:04:13 -- common/autotest_common.sh@852 -- # return 0 00:24:29.065 01:04:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:29.065 01:04:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 01:04:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.065 01:04:13 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:29.065 01:04:13 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 [2024-07-23 01:04:13.168557] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.065 Malloc1 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.065 01:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.065 01:04:13 -- common/autotest_common.sh@10 -- # set +x 00:24:29.065 [2024-07-23 01:04:13.221610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.065 01:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.065 01:04:13 -- target/perf_adq.sh@94 -- # perfpid=3468976 00:24:29.065 01:04:13 -- target/perf_adq.sh@95 -- # sleep 2 00:24:29.065 01:04:13 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:29.065 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.592 01:04:15 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:31.592 01:04:15 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:31.592 01:04:15 -- target/perf_adq.sh@97 -- # wc -l 00:24:31.592 01:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.592 01:04:15 -- common/autotest_common.sh@10 -- # set +x 00:24:31.592 01:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.592 01:04:15 -- target/perf_adq.sh@97 -- # count=2 00:24:31.592 01:04:15 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:31.592 01:04:15 -- target/perf_adq.sh@103 -- # wait 3468976 00:24:39.729 Initializing NVMe Controllers 00:24:39.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:39.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:39.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:39.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:39.730 Initialization complete. Launching workers. 
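Before trusting the numbers that follow, the harness samples the target's poll-group statistics (the rpc_cmd nvmf_get_stats / jq pipeline traced a few lines up) to confirm that ADQ actually steered the connections. A rough sketch of that check, with rpc_cmd standing in for the harness's wrapper around scripts/rpc.py on /var/tmp/spdk.sock:

    # count poll groups carrying no I/O qpairs at all; with socket priority and
    # ADQ steering active the initiator's qpairs are expected to concentrate on
    # a subset of the 4 reactors, so this run requires at least 2 empty groups
    empty=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    [[ $empty -lt 2 ]] && echo "ADQ steering did not take effect"

In the first perf pass above the same pipeline selected poll groups with exactly one qpair and required the count to be exactly 4.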
00:24:39.730 ======================================================== 00:24:39.730 Latency(us) 00:24:39.730 Device Information : IOPS MiB/s Average min max 00:24:39.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6316.50 24.67 10133.70 1693.60 57283.65 00:24:39.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6096.10 23.81 10498.84 1630.16 56338.21 00:24:39.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7700.00 30.08 8315.14 1570.26 51756.39 00:24:39.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7783.40 30.40 8225.22 1413.75 52450.75 00:24:39.730 ======================================================== 00:24:39.730 Total : 27896.00 108.97 9179.03 1413.75 57283.65 00:24:39.730 00:24:39.730 01:04:23 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:39.730 01:04:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:39.730 01:04:23 -- nvmf/common.sh@116 -- # sync 00:24:39.730 01:04:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:39.730 01:04:23 -- nvmf/common.sh@119 -- # set +e 00:24:39.730 01:04:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:39.730 01:04:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:39.730 rmmod nvme_tcp 00:24:39.730 rmmod nvme_fabrics 00:24:39.730 rmmod nvme_keyring 00:24:39.730 01:04:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:39.730 01:04:23 -- nvmf/common.sh@123 -- # set -e 00:24:39.730 01:04:23 -- nvmf/common.sh@124 -- # return 0 00:24:39.730 01:04:23 -- nvmf/common.sh@477 -- # '[' -n 3468832 ']' 00:24:39.730 01:04:23 -- nvmf/common.sh@478 -- # killprocess 3468832 00:24:39.730 01:04:23 -- common/autotest_common.sh@926 -- # '[' -z 3468832 ']' 00:24:39.730 01:04:23 -- common/autotest_common.sh@930 -- # kill -0 3468832 00:24:39.730 01:04:23 -- common/autotest_common.sh@931 -- # uname 00:24:39.730 01:04:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:39.730 01:04:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3468832 00:24:39.730 01:04:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:39.730 01:04:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:39.730 01:04:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3468832' 00:24:39.730 killing process with pid 3468832 00:24:39.730 01:04:23 -- common/autotest_common.sh@945 -- # kill 3468832 00:24:39.730 01:04:23 -- common/autotest_common.sh@950 -- # wait 3468832 00:24:39.730 01:04:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:39.730 01:04:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:39.730 01:04:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:39.730 01:04:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.730 01:04:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:39.730 01:04:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.730 01:04:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.730 01:04:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.636 01:04:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:41.636 01:04:25 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:41.636 00:24:41.636 real 0m43.771s 00:24:41.636 user 2m33.373s 00:24:41.636 sys 0m11.786s 00:24:41.636 01:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.636 01:04:25 -- common/autotest_common.sh@10 -- # set +x 00:24:41.636 
************************************ 00:24:41.636 END TEST nvmf_perf_adq 00:24:41.636 ************************************ 00:24:41.636 01:04:25 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:41.636 01:04:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:41.636 01:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:41.636 01:04:25 -- common/autotest_common.sh@10 -- # set +x 00:24:41.636 ************************************ 00:24:41.636 START TEST nvmf_shutdown 00:24:41.636 ************************************ 00:24:41.636 01:04:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:41.895 * Looking for test storage... 00:24:41.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.895 01:04:25 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.895 01:04:25 -- nvmf/common.sh@7 -- # uname -s 00:24:41.895 01:04:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.895 01:04:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.895 01:04:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.895 01:04:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.895 01:04:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.895 01:04:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.895 01:04:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.895 01:04:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.895 01:04:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.895 01:04:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.895 01:04:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.895 01:04:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.895 01:04:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.895 01:04:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.895 01:04:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.895 01:04:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.895 01:04:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.895 01:04:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.895 01:04:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.895 01:04:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.895 01:04:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.895 01:04:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.895 01:04:25 -- paths/export.sh@5 -- # export PATH 00:24:41.895 01:04:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.895 01:04:25 -- nvmf/common.sh@46 -- # : 0 00:24:41.895 01:04:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:41.895 01:04:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:41.895 01:04:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:41.895 01:04:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.895 01:04:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.895 01:04:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:41.895 01:04:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:41.895 01:04:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:41.895 01:04:25 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:41.895 01:04:25 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:41.895 01:04:25 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:41.895 01:04:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:41.895 01:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:41.895 01:04:25 -- common/autotest_common.sh@10 -- # set +x 00:24:41.895 ************************************ 00:24:41.895 START TEST nvmf_shutdown_tc1 00:24:41.895 ************************************ 00:24:41.895 01:04:25 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:41.895 01:04:25 -- target/shutdown.sh@74 -- # starttarget 00:24:41.895 01:04:25 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:41.895 01:04:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:41.895 01:04:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.895 01:04:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:41.896 01:04:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:41.896 01:04:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:41.896 
01:04:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.896 01:04:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.896 01:04:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.896 01:04:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:41.896 01:04:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:41.896 01:04:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:41.896 01:04:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 01:04:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.798 01:04:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.798 01:04:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.798 01:04:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.798 01:04:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.798 01:04:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.798 01:04:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.798 01:04:27 -- nvmf/common.sh@294 -- # net_devs=() 00:24:43.798 01:04:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.798 01:04:27 -- nvmf/common.sh@295 -- # e810=() 00:24:43.798 01:04:27 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.798 01:04:27 -- nvmf/common.sh@296 -- # x722=() 00:24:43.798 01:04:27 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.798 01:04:27 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.798 01:04:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:43.798 01:04:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.798 01:04:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.798 01:04:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:43.798 01:04:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.798 01:04:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.798 01:04:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.798 01:04:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:24:43.798 01:04:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.798 01:04:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.798 01:04:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.798 01:04:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.798 01:04:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.798 01:04:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.798 01:04:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.798 01:04:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.798 01:04:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.798 01:04:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.798 01:04:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.798 01:04:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.798 01:04:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.798 01:04:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.798 01:04:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.798 01:04:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:43.798 01:04:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:43.798 01:04:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:43.798 01:04:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.798 01:04:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.798 01:04:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.798 01:04:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:43.798 01:04:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.798 01:04:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.798 01:04:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:43.798 01:04:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.798 01:04:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.798 01:04:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:43.798 01:04:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:43.798 01:04:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.798 01:04:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.798 01:04:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.798 01:04:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.798 01:04:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:43.798 01:04:27 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.798 01:04:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.798 01:04:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.056 01:04:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:44.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:44.056 00:24:44.057 --- 10.0.0.2 ping statistics --- 00:24:44.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.057 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:44.057 01:04:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:24:44.057 00:24:44.057 --- 10.0.0.1 ping statistics --- 00:24:44.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.057 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:44.057 01:04:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.057 01:04:28 -- nvmf/common.sh@410 -- # return 0 00:24:44.057 01:04:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:44.057 01:04:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.057 01:04:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:44.057 01:04:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:44.057 01:04:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.057 01:04:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:44.057 01:04:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:44.057 01:04:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:44.057 01:04:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:44.057 01:04:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:44.057 01:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.057 01:04:28 -- nvmf/common.sh@469 -- # nvmfpid=3472183 00:24:44.057 01:04:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:44.057 01:04:28 -- nvmf/common.sh@470 -- # waitforlisten 3472183 00:24:44.057 01:04:28 -- common/autotest_common.sh@819 -- # '[' -z 3472183 ']' 00:24:44.057 01:04:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.057 01:04:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:44.057 01:04:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.057 01:04:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:44.057 01:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.057 [2024-07-23 01:04:28.076317] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
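The target for the shutdown test is started the same way as in the perf runs: nvmf_tgt is launched through ip netns exec so it binds inside the test namespace, its pid is recorded, and the harness's waitforlisten helper blocks until the RPC socket answers (the trace shows it allows up to 100 retries). A crude stand-in for that sequence, with paths shortened:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # poll for the UNIX-domain RPC socket in place of the harness's waitforlisten
    for _ in $(seq 1 100); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.5
    done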
00:24:44.057 [2024-07-23 01:04:28.076389] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.057 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.057 [2024-07-23 01:04:28.143872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.057 [2024-07-23 01:04:28.235529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.057 [2024-07-23 01:04:28.235703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.057 [2024-07-23 01:04:28.235722] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.057 [2024-07-23 01:04:28.235734] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.057 [2024-07-23 01:04:28.235975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.057 [2024-07-23 01:04:28.236244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.057 [2024-07-23 01:04:28.236656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:44.057 [2024-07-23 01:04:28.236661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.992 01:04:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:44.992 01:04:29 -- common/autotest_common.sh@852 -- # return 0 00:24:44.992 01:04:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.992 01:04:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:44.992 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:44.992 01:04:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.992 01:04:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.992 01:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.992 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:44.992 [2024-07-23 01:04:29.072310] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.992 01:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.992 01:04:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:44.992 01:04:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:44.992 01:04:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:44.992 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:44.992 01:04:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- 
target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.992 01:04:29 -- target/shutdown.sh@28 -- # cat 00:24:44.992 01:04:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:44.992 01:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.992 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:44.992 Malloc1 00:24:44.992 [2024-07-23 01:04:29.161714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.992 Malloc2 00:24:45.250 Malloc3 00:24:45.250 Malloc4 00:24:45.250 Malloc5 00:24:45.250 Malloc6 00:24:45.250 Malloc7 00:24:45.509 Malloc8 00:24:45.509 Malloc9 00:24:45.509 Malloc10 00:24:45.509 01:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:45.509 01:04:29 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:45.509 01:04:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:45.509 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:45.509 01:04:29 -- target/shutdown.sh@78 -- # perfpid=3472376 00:24:45.509 01:04:29 -- target/shutdown.sh@79 -- # waitforlisten 3472376 /var/tmp/bdevperf.sock 00:24:45.509 01:04:29 -- common/autotest_common.sh@819 -- # '[' -z 3472376 ']' 00:24:45.509 01:04:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.509 01:04:29 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:45.509 01:04:29 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:45.509 01:04:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:45.509 01:04:29 -- nvmf/common.sh@520 -- # config=() 00:24:45.509 01:04:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
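The initiator-side application for tc1 (bdev_svc, listening on /var/tmp/bdevperf.sock) does not read a config file from disk: it is pointed at /dev/fd/63, and gen_nvmf_target_json, whose per-subsystem heredocs are traced below, feeds one bdev_nvme_attach_controller entry per target subsystem through that descriptor via process substitution. With this run's values (tcp transport, 10.0.0.2, port 4420) substituted, each fragment expands to roughly:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

repeated for cnode1 through cnode10; the outer JSON that joins these fragments into a full bdev subsystem config is produced by the same helper but is not visible in this part of the log.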
00:24:45.509 01:04:29 -- nvmf/common.sh@520 -- # local subsystem config 00:24:45.509 01:04:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:45.509 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.509 01:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.509 { 00:24:45.509 "params": { 00:24:45.509 "name": "Nvme$subsystem", 00:24:45.509 "trtype": "$TEST_TRANSPORT", 00:24:45.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.509 "adrfam": "ipv4", 00:24:45.509 "trsvcid": "$NVMF_PORT", 00:24:45.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.509 "hdgst": ${hdgst:-false}, 00:24:45.509 "ddgst": ${ddgst:-false} 00:24:45.509 }, 00:24:45.509 "method": "bdev_nvme_attach_controller" 00:24:45.509 } 00:24:45.509 EOF 00:24:45.509 )") 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.509 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.509 { 00:24:45.509 "params": { 00:24:45.509 "name": "Nvme$subsystem", 00:24:45.509 "trtype": "$TEST_TRANSPORT", 00:24:45.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.509 "adrfam": "ipv4", 00:24:45.509 "trsvcid": "$NVMF_PORT", 00:24:45.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.509 "hdgst": ${hdgst:-false}, 00:24:45.509 "ddgst": ${ddgst:-false} 00:24:45.509 }, 00:24:45.509 "method": "bdev_nvme_attach_controller" 00:24:45.509 } 00:24:45.509 EOF 00:24:45.509 )") 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.509 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.509 { 00:24:45.509 "params": { 00:24:45.509 "name": "Nvme$subsystem", 00:24:45.509 "trtype": "$TEST_TRANSPORT", 00:24:45.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.509 "adrfam": "ipv4", 00:24:45.509 "trsvcid": "$NVMF_PORT", 00:24:45.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.509 "hdgst": ${hdgst:-false}, 00:24:45.509 "ddgst": ${ddgst:-false} 00:24:45.509 }, 00:24:45.509 "method": "bdev_nvme_attach_controller" 00:24:45.509 } 00:24:45.509 EOF 00:24:45.509 )") 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.509 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.509 { 00:24:45.509 "params": { 00:24:45.509 "name": "Nvme$subsystem", 00:24:45.509 "trtype": "$TEST_TRANSPORT", 00:24:45.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.509 "adrfam": "ipv4", 00:24:45.509 "trsvcid": "$NVMF_PORT", 00:24:45.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.509 "hdgst": ${hdgst:-false}, 00:24:45.509 "ddgst": ${ddgst:-false} 00:24:45.509 }, 00:24:45.509 "method": "bdev_nvme_attach_controller" 00:24:45.509 } 00:24:45.509 EOF 00:24:45.509 )") 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.509 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.509 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.509 { 00:24:45.509 "params": { 00:24:45.509 "name": "Nvme$subsystem", 00:24:45.509 "trtype": "$TEST_TRANSPORT", 00:24:45.509 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:45.509 "adrfam": "ipv4", 00:24:45.509 "trsvcid": "$NVMF_PORT", 00:24:45.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.509 "hdgst": ${hdgst:-false}, 00:24:45.509 "ddgst": ${ddgst:-false} 00:24:45.509 }, 00:24:45.509 "method": "bdev_nvme_attach_controller" 00:24:45.509 } 00:24:45.509 EOF 00:24:45.510 )") 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.510 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.510 { 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme$subsystem", 00:24:45.510 "trtype": "$TEST_TRANSPORT", 00:24:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "$NVMF_PORT", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.510 "hdgst": ${hdgst:-false}, 00:24:45.510 "ddgst": ${ddgst:-false} 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 } 00:24:45.510 EOF 00:24:45.510 )") 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.510 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.510 { 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme$subsystem", 00:24:45.510 "trtype": "$TEST_TRANSPORT", 00:24:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "$NVMF_PORT", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.510 "hdgst": ${hdgst:-false}, 00:24:45.510 "ddgst": ${ddgst:-false} 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 } 00:24:45.510 EOF 00:24:45.510 )") 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.510 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.510 { 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme$subsystem", 00:24:45.510 "trtype": "$TEST_TRANSPORT", 00:24:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "$NVMF_PORT", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.510 "hdgst": ${hdgst:-false}, 00:24:45.510 "ddgst": ${ddgst:-false} 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 } 00:24:45.510 EOF 00:24:45.510 )") 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.510 01:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.510 { 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme$subsystem", 00:24:45.510 "trtype": "$TEST_TRANSPORT", 00:24:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "$NVMF_PORT", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.510 "hdgst": ${hdgst:-false}, 00:24:45.510 "ddgst": ${ddgst:-false} 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 } 00:24:45.510 EOF 00:24:45.510 )") 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.510 01:04:29 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:45.510 { 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme$subsystem", 00:24:45.510 "trtype": "$TEST_TRANSPORT", 00:24:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "$NVMF_PORT", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.510 "hdgst": ${hdgst:-false}, 00:24:45.510 "ddgst": ${ddgst:-false} 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 } 00:24:45.510 EOF 00:24:45.510 )") 00:24:45.510 01:04:29 -- nvmf/common.sh@542 -- # cat 00:24:45.510 01:04:29 -- nvmf/common.sh@544 -- # jq . 00:24:45.510 01:04:29 -- nvmf/common.sh@545 -- # IFS=, 00:24:45.510 01:04:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme1", 00:24:45.510 "trtype": "tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 },{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme2", 00:24:45.510 "trtype": "tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 },{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme3", 00:24:45.510 "trtype": "tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 },{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme4", 00:24:45.510 "trtype": "tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 },{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme5", 00:24:45.510 "trtype": "tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 },{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme6", 00:24:45.510 "trtype": "tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.510 "method": "bdev_nvme_attach_controller" 00:24:45.510 },{ 00:24:45.510 "params": { 00:24:45.510 "name": "Nvme7", 00:24:45.510 "trtype": 
"tcp", 00:24:45.510 "traddr": "10.0.0.2", 00:24:45.510 "adrfam": "ipv4", 00:24:45.510 "trsvcid": "4420", 00:24:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:45.510 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:45.510 "hdgst": false, 00:24:45.510 "ddgst": false 00:24:45.510 }, 00:24:45.511 "method": "bdev_nvme_attach_controller" 00:24:45.511 },{ 00:24:45.511 "params": { 00:24:45.511 "name": "Nvme8", 00:24:45.511 "trtype": "tcp", 00:24:45.511 "traddr": "10.0.0.2", 00:24:45.511 "adrfam": "ipv4", 00:24:45.511 "trsvcid": "4420", 00:24:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:45.511 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:45.511 "hdgst": false, 00:24:45.511 "ddgst": false 00:24:45.511 }, 00:24:45.511 "method": "bdev_nvme_attach_controller" 00:24:45.511 },{ 00:24:45.511 "params": { 00:24:45.511 "name": "Nvme9", 00:24:45.511 "trtype": "tcp", 00:24:45.511 "traddr": "10.0.0.2", 00:24:45.511 "adrfam": "ipv4", 00:24:45.511 "trsvcid": "4420", 00:24:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:45.511 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:45.511 "hdgst": false, 00:24:45.511 "ddgst": false 00:24:45.511 }, 00:24:45.511 "method": "bdev_nvme_attach_controller" 00:24:45.511 },{ 00:24:45.511 "params": { 00:24:45.511 "name": "Nvme10", 00:24:45.511 "trtype": "tcp", 00:24:45.511 "traddr": "10.0.0.2", 00:24:45.511 "adrfam": "ipv4", 00:24:45.511 "trsvcid": "4420", 00:24:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:45.511 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:45.511 "hdgst": false, 00:24:45.511 "ddgst": false 00:24:45.511 }, 00:24:45.511 "method": "bdev_nvme_attach_controller" 00:24:45.511 }' 00:24:45.511 [2024-07-23 01:04:29.663697] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:45.511 [2024-07-23 01:04:29.663775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:45.511 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.768 [2024-07-23 01:04:29.728376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.768 [2024-07-23 01:04:29.813711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.665 01:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:47.665 01:04:31 -- common/autotest_common.sh@852 -- # return 0 00:24:47.665 01:04:31 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:47.665 01:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.665 01:04:31 -- common/autotest_common.sh@10 -- # set +x 00:24:47.665 01:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.665 01:04:31 -- target/shutdown.sh@83 -- # kill -9 3472376 00:24:47.665 01:04:31 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:47.665 01:04:31 -- target/shutdown.sh@87 -- # sleep 1 00:24:48.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3472376 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:48.231 01:04:32 -- target/shutdown.sh@88 -- # kill -0 3472183 00:24:48.231 01:04:32 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:48.231 01:04:32 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:48.231 01:04:32 -- nvmf/common.sh@520 -- # config=() 00:24:48.231 01:04:32 -- nvmf/common.sh@520 -- # local subsystem config 00:24:48.231 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.231 { 00:24:48.231 "params": { 00:24:48.231 "name": "Nvme$subsystem", 00:24:48.231 "trtype": "$TEST_TRANSPORT", 00:24:48.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.231 "adrfam": "ipv4", 00:24:48.231 "trsvcid": "$NVMF_PORT", 00:24:48.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.231 "hdgst": ${hdgst:-false}, 00:24:48.231 "ddgst": ${ddgst:-false} 00:24:48.231 }, 00:24:48.231 "method": "bdev_nvme_attach_controller" 00:24:48.231 } 00:24:48.231 EOF 00:24:48.231 )") 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.231 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.231 { 00:24:48.231 "params": { 00:24:48.231 "name": "Nvme$subsystem", 00:24:48.231 "trtype": "$TEST_TRANSPORT", 00:24:48.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.231 "adrfam": "ipv4", 00:24:48.231 "trsvcid": "$NVMF_PORT", 00:24:48.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.231 "hdgst": ${hdgst:-false}, 00:24:48.231 "ddgst": ${ddgst:-false} 00:24:48.231 }, 00:24:48.231 "method": "bdev_nvme_attach_controller" 00:24:48.231 } 00:24:48.231 EOF 00:24:48.231 )") 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.231 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.231 { 00:24:48.231 "params": { 00:24:48.231 "name": "Nvme$subsystem", 00:24:48.231 "trtype": "$TEST_TRANSPORT", 00:24:48.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.231 "adrfam": "ipv4", 00:24:48.231 "trsvcid": "$NVMF_PORT", 00:24:48.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.231 "hdgst": ${hdgst:-false}, 00:24:48.231 "ddgst": ${ddgst:-false} 00:24:48.231 }, 00:24:48.231 "method": "bdev_nvme_attach_controller" 00:24:48.231 } 00:24:48.231 EOF 00:24:48.231 )") 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.231 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.231 { 00:24:48.231 "params": { 00:24:48.231 "name": "Nvme$subsystem", 00:24:48.231 "trtype": "$TEST_TRANSPORT", 00:24:48.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.231 "adrfam": "ipv4", 00:24:48.231 "trsvcid": "$NVMF_PORT", 00:24:48.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.231 "hdgst": ${hdgst:-false}, 00:24:48.231 "ddgst": ${ddgst:-false} 00:24:48.231 }, 00:24:48.231 "method": "bdev_nvme_attach_controller" 00:24:48.231 } 00:24:48.231 EOF 00:24:48.231 )") 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.231 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.231 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.231 { 00:24:48.231 "params": { 00:24:48.231 "name": "Nvme$subsystem", 00:24:48.231 "trtype": "$TEST_TRANSPORT", 00:24:48.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.231 "adrfam": "ipv4", 
00:24:48.231 "trsvcid": "$NVMF_PORT", 00:24:48.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.231 "hdgst": ${hdgst:-false}, 00:24:48.231 "ddgst": ${ddgst:-false} 00:24:48.231 }, 00:24:48.231 "method": "bdev_nvme_attach_controller" 00:24:48.231 } 00:24:48.231 EOF 00:24:48.231 )") 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.232 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.232 { 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme$subsystem", 00:24:48.232 "trtype": "$TEST_TRANSPORT", 00:24:48.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "$NVMF_PORT", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.232 "hdgst": ${hdgst:-false}, 00:24:48.232 "ddgst": ${ddgst:-false} 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 } 00:24:48.232 EOF 00:24:48.232 )") 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.232 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.232 { 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme$subsystem", 00:24:48.232 "trtype": "$TEST_TRANSPORT", 00:24:48.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "$NVMF_PORT", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.232 "hdgst": ${hdgst:-false}, 00:24:48.232 "ddgst": ${ddgst:-false} 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 } 00:24:48.232 EOF 00:24:48.232 )") 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.232 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.232 { 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme$subsystem", 00:24:48.232 "trtype": "$TEST_TRANSPORT", 00:24:48.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "$NVMF_PORT", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.232 "hdgst": ${hdgst:-false}, 00:24:48.232 "ddgst": ${ddgst:-false} 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 } 00:24:48.232 EOF 00:24:48.232 )") 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.232 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.232 { 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme$subsystem", 00:24:48.232 "trtype": "$TEST_TRANSPORT", 00:24:48.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "$NVMF_PORT", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.232 "hdgst": ${hdgst:-false}, 00:24:48.232 "ddgst": ${ddgst:-false} 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 } 00:24:48.232 EOF 00:24:48.232 )") 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.232 01:04:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.232 01:04:32 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.232 { 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme$subsystem", 00:24:48.232 "trtype": "$TEST_TRANSPORT", 00:24:48.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "$NVMF_PORT", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.232 "hdgst": ${hdgst:-false}, 00:24:48.232 "ddgst": ${ddgst:-false} 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 } 00:24:48.232 EOF 00:24:48.232 )") 00:24:48.232 01:04:32 -- nvmf/common.sh@542 -- # cat 00:24:48.232 01:04:32 -- nvmf/common.sh@544 -- # jq . 00:24:48.232 01:04:32 -- nvmf/common.sh@545 -- # IFS=, 00:24:48.232 01:04:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme1", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme2", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme3", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme4", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme5", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme6", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme7", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 
00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme8", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme9", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 },{ 00:24:48.232 "params": { 00:24:48.232 "name": "Nvme10", 00:24:48.232 "trtype": "tcp", 00:24:48.232 "traddr": "10.0.0.2", 00:24:48.232 "adrfam": "ipv4", 00:24:48.232 "trsvcid": "4420", 00:24:48.232 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:48.232 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:48.232 "hdgst": false, 00:24:48.232 "ddgst": false 00:24:48.232 }, 00:24:48.232 "method": "bdev_nvme_attach_controller" 00:24:48.232 }' 00:24:48.232 [2024-07-23 01:04:32.409240] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:48.232 [2024-07-23 01:04:32.409328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472803 ] 00:24:48.491 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.491 [2024-07-23 01:04:32.474460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.491 [2024-07-23 01:04:32.562467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.386 Running I/O for 1 seconds... 
00:24:51.320
00:24:51.320 Latency(us)
00:24:51.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.320 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme1n1 : 1.13 352.21 22.01 0.00 0.00 172314.67 28544.57 180199.73
00:24:51.320 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme2n1 : 1.07 372.74 23.30 0.00 0.00 164691.85 10922.67 132042.90
00:24:51.320 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme3n1 : 1.08 370.07 23.13 0.00 0.00 164743.82 54758.97 125052.40
00:24:51.320 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme4n1 : 1.09 366.10 22.88 0.00 0.00 167822.90 26796.94 141363.58
00:24:51.320 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme5n1 : 1.09 369.14 23.07 0.00 0.00 165519.33 13204.29 162335.10
00:24:51.320 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme6n1 : 1.11 392.54 24.53 0.00 0.00 155677.53 9709.04 128936.01
00:24:51.320 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme7n1 : 1.11 391.92 24.50 0.00 0.00 154829.33 8543.95 131266.18
00:24:51.320 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme8n1 : 1.09 364.82 22.80 0.00 0.00 163280.70 29321.29 127382.57
00:24:51.320 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme9n1 : 1.11 390.26 24.39 0.00 0.00 153325.21 8543.95 126605.84
00:24:51.320 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:51.320 Verification LBA range: start 0x0 length 0x400
00:24:51.320 Nvme10n1 : 1.11 391.12 24.45 0.00 0.00 151996.37 8689.59 126605.84
00:24:51.320 ===================================================================================================================
00:24:51.320 Total : 3760.92 235.06 0.00 0.00 161166.22 8543.95 180199.73
00:24:51.320 01:04:35 -- target/shutdown.sh@93 -- # stoptarget
00:24:51.320 01:04:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:51.320 01:04:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:51.320 01:04:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:51.320 01:04:35 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:51.320 01:04:35 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:51.320 01:04:35 -- nvmf/common.sh@116 -- # sync
00:24:51.320 01:04:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:51.320 01:04:35 -- nvmf/common.sh@119 -- # set +e
00:24:51.320 01:04:35 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:51.320 01:04:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:51.320 rmmod nvme_tcp
00:24:51.320 rmmod nvme_fabrics
00:24:51.578 rmmod nvme_keyring
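A quick consistency check on the bdevperf latency table above: with 65536-byte I/O, the MiB/s column is simply the IOPS column scaled by the I/O size. For example, for Nvme1n1:
  awk 'BEGIN { printf "%.2f MiB/s\n", 352.21 * 65536 / (1024 * 1024) }'   # prints 22.01 MiB/s, matching the reported value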
00:24:51.578 01:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:51.578 01:04:35 -- nvmf/common.sh@123 -- # set -e 00:24:51.578 01:04:35 -- nvmf/common.sh@124 -- # return 0 00:24:51.578 01:04:35 -- nvmf/common.sh@477 -- # '[' -n 3472183 ']' 00:24:51.578 01:04:35 -- nvmf/common.sh@478 -- # killprocess 3472183 00:24:51.578 01:04:35 -- common/autotest_common.sh@926 -- # '[' -z 3472183 ']' 00:24:51.578 01:04:35 -- common/autotest_common.sh@930 -- # kill -0 3472183 00:24:51.578 01:04:35 -- common/autotest_common.sh@931 -- # uname 00:24:51.578 01:04:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:51.578 01:04:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3472183 00:24:51.578 01:04:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:51.578 01:04:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:51.578 01:04:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3472183' 00:24:51.578 killing process with pid 3472183 00:24:51.578 01:04:35 -- common/autotest_common.sh@945 -- # kill 3472183 00:24:51.578 01:04:35 -- common/autotest_common.sh@950 -- # wait 3472183 00:24:52.145 01:04:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:52.145 01:04:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:52.145 01:04:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:52.145 01:04:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.145 01:04:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:52.145 01:04:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.145 01:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.145 01:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.050 01:04:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:54.050 00:24:54.050 real 0m12.275s 00:24:54.050 user 0m35.690s 00:24:54.050 sys 0m3.372s 00:24:54.050 01:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.050 01:04:38 -- common/autotest_common.sh@10 -- # set +x 00:24:54.050 ************************************ 00:24:54.050 END TEST nvmf_shutdown_tc1 00:24:54.050 ************************************ 00:24:54.050 01:04:38 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:54.050 01:04:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:54.050 01:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:54.050 01:04:38 -- common/autotest_common.sh@10 -- # set +x 00:24:54.050 ************************************ 00:24:54.050 START TEST nvmf_shutdown_tc2 00:24:54.050 ************************************ 00:24:54.050 01:04:38 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:24:54.050 01:04:38 -- target/shutdown.sh@98 -- # starttarget 00:24:54.050 01:04:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:54.050 01:04:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:54.050 01:04:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.050 01:04:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:54.050 01:04:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:54.050 01:04:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:54.050 01:04:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.050 01:04:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.050 01:04:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.050 01:04:38 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:54.050 01:04:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:54.050 01:04:38 -- common/autotest_common.sh@10 -- # set +x 00:24:54.050 01:04:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:54.050 01:04:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:54.050 01:04:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:54.050 01:04:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:54.050 01:04:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:54.050 01:04:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:54.050 01:04:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:54.050 01:04:38 -- nvmf/common.sh@294 -- # net_devs=() 00:24:54.050 01:04:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:54.050 01:04:38 -- nvmf/common.sh@295 -- # e810=() 00:24:54.050 01:04:38 -- nvmf/common.sh@295 -- # local -ga e810 00:24:54.050 01:04:38 -- nvmf/common.sh@296 -- # x722=() 00:24:54.050 01:04:38 -- nvmf/common.sh@296 -- # local -ga x722 00:24:54.050 01:04:38 -- nvmf/common.sh@297 -- # mlx=() 00:24:54.050 01:04:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:54.050 01:04:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.050 01:04:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:54.050 01:04:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:54.050 01:04:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:54.050 01:04:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.050 01:04:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.050 01:04:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.050 01:04:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.050 01:04:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:54.050 01:04:38 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:54.050 01:04:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.050 01:04:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.050 01:04:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.050 01:04:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.050 01:04:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.050 01:04:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.050 01:04:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.050 01:04:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.050 01:04:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.050 01:04:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.050 01:04:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.050 01:04:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.050 01:04:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:54.050 01:04:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:54.050 01:04:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:54.050 01:04:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:54.050 01:04:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.050 01:04:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.050 01:04:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.050 01:04:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:54.050 01:04:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.050 01:04:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.050 01:04:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:54.050 01:04:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.050 01:04:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.050 01:04:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:54.050 01:04:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:54.050 01:04:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.050 01:04:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.308 01:04:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.308 01:04:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.308 01:04:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:54.308 01:04:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.308 01:04:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.308 01:04:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
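The nvmf_tcp_init block above builds the two-interface TCP topology used for the rest of tc2: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the default namespace with 10.0.0.1/24, and TCP port 4420 is opened in iptables. A minimal sketch of the same pattern on a veth pair (interface and namespace names here are illustrative, not the test's physical ports):
  ip netns add nvmf_tgt_ns
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_init
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_init up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT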
00:24:54.308 01:04:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:54.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:24:54.308 00:24:54.309 --- 10.0.0.2 ping statistics --- 00:24:54.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.309 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:54.309 01:04:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:54.309 00:24:54.309 --- 10.0.0.1 ping statistics --- 00:24:54.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.309 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:54.309 01:04:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.309 01:04:38 -- nvmf/common.sh@410 -- # return 0 00:24:54.309 01:04:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:54.309 01:04:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.309 01:04:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:54.309 01:04:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:54.309 01:04:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.309 01:04:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:54.309 01:04:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:54.309 01:04:38 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:54.309 01:04:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:54.309 01:04:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:54.309 01:04:38 -- common/autotest_common.sh@10 -- # set +x 00:24:54.309 01:04:38 -- nvmf/common.sh@469 -- # nvmfpid=3473594 00:24:54.309 01:04:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:54.309 01:04:38 -- nvmf/common.sh@470 -- # waitforlisten 3473594 00:24:54.309 01:04:38 -- common/autotest_common.sh@819 -- # '[' -z 3473594 ']' 00:24:54.309 01:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.309 01:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:54.309 01:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.309 01:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:54.309 01:04:38 -- common/autotest_common.sh@10 -- # set +x 00:24:54.309 [2024-07-23 01:04:38.401147] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:24:54.309 [2024-07-23 01:04:38.401217] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.309 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.309 [2024-07-23 01:04:38.463032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.566 [2024-07-23 01:04:38.547456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:54.566 [2024-07-23 01:04:38.547610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.566 [2024-07-23 01:04:38.547635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.566 [2024-07-23 01:04:38.547647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.566 [2024-07-23 01:04:38.547796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.566 [2024-07-23 01:04:38.547860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.566 [2024-07-23 01:04:38.547926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.566 [2024-07-23 01:04:38.547928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.499 01:04:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.499 01:04:39 -- common/autotest_common.sh@852 -- # return 0 00:24:55.499 01:04:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:55.499 01:04:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.499 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.499 01:04:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.499 01:04:39 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.499 01:04:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.499 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.499 [2024-07-23 01:04:39.413365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.499 01:04:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.499 01:04:39 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:55.499 01:04:39 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:55.499 01:04:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.499 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.499 01:04:39 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- 
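The -m 0x1E core mask passed to nvmf_tgt above selects cores 1 through 4, which is why exactly four reactors come up in the notices just logged. A quick way to decode such a mask (an illustrative helper, not part of the test scripts):
  mask=0x1E; for b in $(seq 0 31); do (( (mask >> b) & 1 )) && printf '%d ' "$b"; done; echo   # -> 1 2 3 4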
target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.499 01:04:39 -- target/shutdown.sh@28 -- # cat 00:24:55.499 01:04:39 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:55.499 01:04:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.499 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.499 Malloc1 00:24:55.499 [2024-07-23 01:04:39.502784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.499 Malloc2 00:24:55.499 Malloc3 00:24:55.499 Malloc4 00:24:55.499 Malloc5 00:24:55.757 Malloc6 00:24:55.757 Malloc7 00:24:55.757 Malloc8 00:24:55.757 Malloc9 00:24:55.757 Malloc10 00:24:55.757 01:04:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.757 01:04:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:55.757 01:04:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.757 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:24:56.015 01:04:39 -- target/shutdown.sh@102 -- # perfpid=3473791 00:24:56.015 01:04:39 -- target/shutdown.sh@103 -- # waitforlisten 3473791 /var/tmp/bdevperf.sock 00:24:56.015 01:04:39 -- common/autotest_common.sh@819 -- # '[' -z 3473791 ']' 00:24:56.015 01:04:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.015 01:04:39 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:56.015 01:04:39 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.015 01:04:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:56.015 01:04:39 -- nvmf/common.sh@520 -- # config=() 00:24:56.015 01:04:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:56.015 01:04:39 -- nvmf/common.sh@520 -- # local subsystem config 00:24:56.015 01:04:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:56.015 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.015 01:04:39 -- common/autotest_common.sh@10 -- # set +x 00:24:56.015 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.015 { 00:24:56.015 "params": { 00:24:56.015 "name": "Nvme$subsystem", 00:24:56.015 "trtype": "$TEST_TRANSPORT", 00:24:56.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.015 "adrfam": "ipv4", 00:24:56.015 "trsvcid": "$NVMF_PORT", 00:24:56.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.015 "hdgst": ${hdgst:-false}, 00:24:56.015 "ddgst": ${ddgst:-false} 00:24:56.015 }, 00:24:56.015 "method": "bdev_nvme_attach_controller" 00:24:56.015 } 00:24:56.015 EOF 00:24:56.015 )") 00:24:56.015 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.016 { 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme$subsystem", 00:24:56.016 "trtype": "$TEST_TRANSPORT", 00:24:56.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "$NVMF_PORT", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.016 "hdgst": ${hdgst:-false}, 00:24:56.016 "ddgst": ${ddgst:-false} 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 } 00:24:56.016 EOF 00:24:56.016 )") 00:24:56.016 01:04:39 -- nvmf/common.sh@542 -- # cat 00:24:56.016 01:04:39 -- nvmf/common.sh@544 -- # jq . 00:24:56.016 01:04:39 -- nvmf/common.sh@545 -- # IFS=, 00:24:56.016 01:04:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme1", 00:24:56.016 "trtype": "tcp", 00:24:56.016 "traddr": "10.0.0.2", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "4420", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.016 "hdgst": false, 00:24:56.016 "ddgst": false 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 },{ 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme2", 00:24:56.016 "trtype": "tcp", 00:24:56.016 "traddr": "10.0.0.2", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "4420", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.016 "hdgst": false, 00:24:56.016 "ddgst": false 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 },{ 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme3", 00:24:56.016 "trtype": "tcp", 00:24:56.016 "traddr": "10.0.0.2", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "4420", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:56.016 "hdgst": false, 00:24:56.016 "ddgst": false 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 },{ 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme4", 00:24:56.016 "trtype": "tcp", 00:24:56.016 "traddr": "10.0.0.2", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "4420", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:56.016 "hdgst": false, 00:24:56.016 "ddgst": false 00:24:56.016 }, 00:24:56.016 "method": "bdev_nvme_attach_controller" 00:24:56.016 },{ 00:24:56.016 "params": { 00:24:56.016 "name": "Nvme5", 00:24:56.016 "trtype": "tcp", 00:24:56.016 "traddr": "10.0.0.2", 00:24:56.016 "adrfam": "ipv4", 00:24:56.016 "trsvcid": "4420", 00:24:56.016 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:56.016 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:56.016 "hdgst": false, 00:24:56.016 "ddgst": false 00:24:56.016 }, 00:24:56.017 "method": "bdev_nvme_attach_controller" 00:24:56.017 },{ 00:24:56.017 "params": { 00:24:56.017 "name": "Nvme6", 00:24:56.017 "trtype": "tcp", 00:24:56.017 "traddr": "10.0.0.2", 00:24:56.017 "adrfam": "ipv4", 00:24:56.017 "trsvcid": "4420", 00:24:56.017 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:56.017 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:56.017 "hdgst": false, 00:24:56.017 "ddgst": false 00:24:56.017 }, 00:24:56.017 "method": "bdev_nvme_attach_controller" 00:24:56.017 },{ 00:24:56.017 "params": { 00:24:56.017 "name": "Nvme7", 00:24:56.017 "trtype": 
"tcp", 00:24:56.017 "traddr": "10.0.0.2", 00:24:56.017 "adrfam": "ipv4", 00:24:56.017 "trsvcid": "4420", 00:24:56.017 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:56.017 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:56.017 "hdgst": false, 00:24:56.017 "ddgst": false 00:24:56.017 }, 00:24:56.017 "method": "bdev_nvme_attach_controller" 00:24:56.017 },{ 00:24:56.017 "params": { 00:24:56.017 "name": "Nvme8", 00:24:56.017 "trtype": "tcp", 00:24:56.017 "traddr": "10.0.0.2", 00:24:56.017 "adrfam": "ipv4", 00:24:56.017 "trsvcid": "4420", 00:24:56.017 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:56.017 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:56.017 "hdgst": false, 00:24:56.017 "ddgst": false 00:24:56.017 }, 00:24:56.017 "method": "bdev_nvme_attach_controller" 00:24:56.017 },{ 00:24:56.017 "params": { 00:24:56.017 "name": "Nvme9", 00:24:56.017 "trtype": "tcp", 00:24:56.017 "traddr": "10.0.0.2", 00:24:56.017 "adrfam": "ipv4", 00:24:56.017 "trsvcid": "4420", 00:24:56.017 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:56.017 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:56.017 "hdgst": false, 00:24:56.017 "ddgst": false 00:24:56.017 }, 00:24:56.017 "method": "bdev_nvme_attach_controller" 00:24:56.017 },{ 00:24:56.017 "params": { 00:24:56.017 "name": "Nvme10", 00:24:56.017 "trtype": "tcp", 00:24:56.017 "traddr": "10.0.0.2", 00:24:56.017 "adrfam": "ipv4", 00:24:56.017 "trsvcid": "4420", 00:24:56.017 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:56.017 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:56.017 "hdgst": false, 00:24:56.017 "ddgst": false 00:24:56.017 }, 00:24:56.017 "method": "bdev_nvme_attach_controller" 00:24:56.017 }' 00:24:56.017 [2024-07-23 01:04:40.007283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:56.017 [2024-07-23 01:04:40.007360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473791 ] 00:24:56.017 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.017 [2024-07-23 01:04:40.088763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.017 [2024-07-23 01:04:40.176330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.943 Running I/O for 10 seconds... 
00:24:57.943 01:04:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:57.943 01:04:41 -- common/autotest_common.sh@852 -- # return 0 00:24:57.943 01:04:41 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:57.943 01:04:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.943 01:04:41 -- common/autotest_common.sh@10 -- # set +x 00:24:57.943 01:04:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.943 01:04:41 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:57.943 01:04:41 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:57.943 01:04:41 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:57.943 01:04:41 -- target/shutdown.sh@57 -- # local ret=1 00:24:57.943 01:04:41 -- target/shutdown.sh@58 -- # local i 00:24:57.943 01:04:41 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:57.943 01:04:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:57.943 01:04:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:57.943 01:04:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:57.943 01:04:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.943 01:04:41 -- common/autotest_common.sh@10 -- # set +x 00:24:57.943 01:04:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.943 01:04:41 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:57.943 01:04:41 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:57.943 01:04:41 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:57.943 01:04:42 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:57.943 01:04:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:57.943 01:04:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:57.943 01:04:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:57.943 01:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.943 01:04:42 -- common/autotest_common.sh@10 -- # set +x 00:24:57.943 01:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.943 01:04:42 -- target/shutdown.sh@60 -- # read_io_count=129 00:24:57.943 01:04:42 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:24:57.943 01:04:42 -- target/shutdown.sh@64 -- # ret=0 00:24:57.943 01:04:42 -- target/shutdown.sh@65 -- # break 00:24:57.943 01:04:42 -- target/shutdown.sh@69 -- # return 0 00:24:57.943 01:04:42 -- target/shutdown.sh@109 -- # killprocess 3473791 00:24:57.943 01:04:42 -- common/autotest_common.sh@926 -- # '[' -z 3473791 ']' 00:24:57.943 01:04:42 -- common/autotest_common.sh@930 -- # kill -0 3473791 00:24:57.943 01:04:42 -- common/autotest_common.sh@931 -- # uname 00:24:57.943 01:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:57.943 01:04:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3473791 00:24:57.943 01:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:57.943 01:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:57.944 01:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3473791' 00:24:57.944 killing process with pid 3473791 00:24:57.944 01:04:42 -- common/autotest_common.sh@945 -- # kill 3473791 00:24:57.944 01:04:42 -- common/autotest_common.sh@950 -- # wait 3473791 00:24:58.201 Received shutdown signal, test time was about 0.514912 seconds 00:24:58.201 00:24:58.201 Latency(us) 00:24:58.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max
00:24:58.201 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme1n1 : 0.49 386.33 24.15 0.00 0.00 157588.91 32428.18 125052.40
00:24:58.201 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme2n1 : 0.49 385.18 24.07 0.00 0.00 155541.84 32622.36 120392.06
00:24:58.201 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme3n1 : 0.51 374.67 23.42 0.00 0.00 157082.00 34564.17 146800.64
00:24:58.201 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme4n1 : 0.49 383.87 23.99 0.00 0.00 150883.55 32622.36 121168.78
00:24:58.201 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme5n1 : 0.50 381.20 23.82 0.00 0.00 149247.32 33787.45 121945.51
00:24:58.201 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme6n1 : 0.51 376.19 23.51 0.00 0.00 148776.17 34758.35 124275.67
00:24:58.201 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme7n1 : 0.50 379.42 23.71 0.00 0.00 145022.35 33981.63 124275.67
00:24:58.201 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme8n1 : 0.51 373.39 23.34 0.00 0.00 145432.18 33010.73 128936.01
00:24:58.201 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.201 Verification LBA range: start 0x0 length 0x400
00:24:58.201 Nvme9n1 : 0.51 371.39 23.21 0.00 0.00 143846.03 33204.91 132042.90
00:24:58.202 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:58.202 Verification LBA range: start 0x0 length 0x400
00:24:58.202 Nvme10n1 : 0.51 375.42 23.46 0.00 0.00 140681.18 13398.47 129712.73
00:24:58.202 ===================================================================================================================
00:24:58.202 Total : 3787.06 236.69 0.00 0.00 149396.39 13398.47 146800.64
00:24:58.459 01:04:42 -- target/shutdown.sh@112 -- # sleep 1
00:24:59.391 01:04:43 -- target/shutdown.sh@113 -- # kill -0 3473594
00:24:59.391 01:04:43 -- target/shutdown.sh@115 -- # stoptarget
00:24:59.391 01:04:43 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:59.391 01:04:43 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:59.392 01:04:43 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:59.392 01:04:43 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:59.392 01:04:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:59.392 01:04:43 -- nvmf/common.sh@116 -- # sync
00:24:59.392 01:04:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:59.392 01:04:43 -- nvmf/common.sh@119 -- # set +e
00:24:59.392 01:04:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:59.392 01:04:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:59.392 rmmod nvme_tcp
00:24:59.392 rmmod nvme_fabrics
00:24:59.392
rmmod nvme_keyring 00:24:59.392 01:04:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:59.392 01:04:43 -- nvmf/common.sh@123 -- # set -e 00:24:59.392 01:04:43 -- nvmf/common.sh@124 -- # return 0 00:24:59.392 01:04:43 -- nvmf/common.sh@477 -- # '[' -n 3473594 ']' 00:24:59.392 01:04:43 -- nvmf/common.sh@478 -- # killprocess 3473594 00:24:59.392 01:04:43 -- common/autotest_common.sh@926 -- # '[' -z 3473594 ']' 00:24:59.392 01:04:43 -- common/autotest_common.sh@930 -- # kill -0 3473594 00:24:59.392 01:04:43 -- common/autotest_common.sh@931 -- # uname 00:24:59.392 01:04:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.392 01:04:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3473594 00:24:59.392 01:04:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:59.392 01:04:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:59.392 01:04:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3473594' 00:24:59.392 killing process with pid 3473594 00:24:59.392 01:04:43 -- common/autotest_common.sh@945 -- # kill 3473594 00:24:59.392 01:04:43 -- common/autotest_common.sh@950 -- # wait 3473594 00:24:59.954 01:04:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:59.954 01:04:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:59.954 01:04:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:59.954 01:04:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.954 01:04:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:59.954 01:04:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.954 01:04:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.954 01:04:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.858 01:04:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:01.858 00:25:01.858 real 0m7.870s 00:25:01.858 user 0m23.947s 00:25:01.858 sys 0m1.454s 00:25:01.858 01:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.858 01:04:46 -- common/autotest_common.sh@10 -- # set +x 00:25:01.858 ************************************ 00:25:01.858 END TEST nvmf_shutdown_tc2 00:25:01.858 ************************************ 00:25:02.117 01:04:46 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:02.117 01:04:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.117 01:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.117 01:04:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.117 ************************************ 00:25:02.117 START TEST nvmf_shutdown_tc3 00:25:02.117 ************************************ 00:25:02.117 01:04:46 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:02.117 01:04:46 -- target/shutdown.sh@120 -- # starttarget 00:25:02.117 01:04:46 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:02.117 01:04:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:02.117 01:04:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.117 01:04:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:02.117 01:04:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:02.117 01:04:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:02.117 01:04:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.117 01:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.117 01:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
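
Two pieces of the tc2 run that just finished above are easy to lose in the joined trace. First, gen_nvmf_target_json (test/nvmf/common.sh) builds one bdev_nvme_attach_controller block per subsystem in a heredoc and joins the ten blocks with IFS=, before bdevperf reads them over --json. A loose standalone sketch of that pattern, with the exported TEST_TRANSPORT/NVMF_* values hard-coded purely for illustration:

#!/usr/bin/env bash
# Sketch of the per-subsystem config generation traced earlier; the real helper
# is gen_nvmf_target_json, and the three values below are assumptions.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the blocks with commas, like the IFS=, / printf step in the trace.
IFS=,
printf '%s\n' "${config[*]}"

Second, target/shutdown.sh's waitforio loop polls bdevperf's RPC socket with bdev_get_iostat until Nvme1n1 has completed at least 100 reads, and only then is the bdevperf process killed, so the shutdown happens under active I/O. A rough sketch of that loop; the socket path, bdev name and 100-read threshold follow the log, while the retry count and sleep interval are assumptions:

#!/usr/bin/env bash
# Rough sketch of the waitforio behaviour seen in the trace above.
waitforio() {
    local sock=$1 bdev=$2 count i
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                    | jq -r '.bdevs[0].num_read_ops')
        [ "$count" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}

# Once enough reads are observed, the harness kills the bdevperf pid.
waitforio /var/tmp/bdevperf.sock Nvme1n1 && echo 'I/O is flowing, safe to shut down'
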
00:25:02.117 01:04:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:02.117 01:04:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:02.117 01:04:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.117 01:04:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:02.117 01:04:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:02.117 01:04:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:02.117 01:04:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:02.117 01:04:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:02.117 01:04:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:02.117 01:04:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:02.117 01:04:46 -- nvmf/common.sh@294 -- # net_devs=() 00:25:02.117 01:04:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:02.117 01:04:46 -- nvmf/common.sh@295 -- # e810=() 00:25:02.117 01:04:46 -- nvmf/common.sh@295 -- # local -ga e810 00:25:02.117 01:04:46 -- nvmf/common.sh@296 -- # x722=() 00:25:02.117 01:04:46 -- nvmf/common.sh@296 -- # local -ga x722 00:25:02.117 01:04:46 -- nvmf/common.sh@297 -- # mlx=() 00:25:02.117 01:04:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:02.117 01:04:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.117 01:04:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:02.117 01:04:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:02.117 01:04:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:02.117 01:04:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.117 01:04:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:02.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:02.117 01:04:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.117 01:04:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:02.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:02.117 01:04:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:25:02.117 01:04:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:02.117 01:04:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.117 01:04:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.117 01:04:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.117 01:04:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.117 01:04:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:02.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:02.117 01:04:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.117 01:04:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.117 01:04:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.117 01:04:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.117 01:04:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.117 01:04:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:02.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:02.117 01:04:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.117 01:04:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:02.117 01:04:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:02.117 01:04:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:02.117 01:04:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.117 01:04:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.117 01:04:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.117 01:04:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:02.117 01:04:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.117 01:04:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.117 01:04:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:02.117 01:04:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.117 01:04:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.117 01:04:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:02.117 01:04:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:02.117 01:04:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.117 01:04:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.117 01:04:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.117 01:04:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.117 01:04:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:02.117 01:04:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.117 01:04:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.117 01:04:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:25:02.117 01:04:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:02.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:25:02.117 00:25:02.117 --- 10.0.0.2 ping statistics --- 00:25:02.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.117 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:02.117 01:04:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:25:02.117 00:25:02.117 --- 10.0.0.1 ping statistics --- 00:25:02.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.117 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:25:02.117 01:04:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.117 01:04:46 -- nvmf/common.sh@410 -- # return 0 00:25:02.117 01:04:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.117 01:04:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.117 01:04:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.117 01:04:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.117 01:04:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.117 01:04:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.117 01:04:46 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:02.117 01:04:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.117 01:04:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.117 01:04:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.118 01:04:46 -- nvmf/common.sh@469 -- # nvmfpid=3474717 00:25:02.118 01:04:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:02.118 01:04:46 -- nvmf/common.sh@470 -- # waitforlisten 3474717 00:25:02.118 01:04:46 -- common/autotest_common.sh@819 -- # '[' -z 3474717 ']' 00:25:02.118 01:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.118 01:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.118 01:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.118 01:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.118 01:04:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.376 [2024-07-23 01:04:46.322071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:02.376 [2024-07-23 01:04:46.322184] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.376 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.376 [2024-07-23 01:04:46.387457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.376 [2024-07-23 01:04:46.475819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.376 [2024-07-23 01:04:46.475982] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.376 [2024-07-23 01:04:46.475999] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.376 [2024-07-23 01:04:46.476011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.376 [2024-07-23 01:04:46.476099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.376 [2024-07-23 01:04:46.476146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.376 [2024-07-23 01:04:46.476239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:02.376 [2024-07-23 01:04:46.476241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.309 01:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.309 01:04:47 -- common/autotest_common.sh@852 -- # return 0 00:25:03.309 01:04:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:03.309 01:04:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.309 01:04:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 01:04:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.309 01:04:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.309 01:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.309 01:04:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 [2024-07-23 01:04:47.291157] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.309 01:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.309 01:04:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:03.309 01:04:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:03.309 01:04:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.309 01:04:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 01:04:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- 
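
The nvmftestinit entries above move one physical port (cvl_0_0) into a private network namespace, address both ends on 10.0.0.0/24, open TCP port 4420 in the firewall, ping-check the path, and then start nvmf_tgt inside the namespace. A condensed sketch of the same sequence follows; the interface names and addresses are the ones from this log and will differ on other machines, and the retry loop around the RPC socket stands in for the harness's waitforlisten helper.

#!/usr/bin/env bash
# Condensed sketch of the TCP test setup traced above (nvmf/common.sh).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0       # port handed to the target namespace
INI_IF=cvl_0_1       # port left in the root namespace for the initiator

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in from the initiator-side port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns

# Start the target inside the namespace and wait until its RPC layer is ready.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until scripts/rpc.py framework_wait_init 2>/dev/null; do sleep 0.5; done
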
target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.309 01:04:47 -- target/shutdown.sh@28 -- # cat 00:25:03.309 01:04:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:03.309 01:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.309 01:04:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 Malloc1 00:25:03.309 [2024-07-23 01:04:47.366072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.309 Malloc2 00:25:03.309 Malloc3 00:25:03.309 Malloc4 00:25:03.567 Malloc5 00:25:03.567 Malloc6 00:25:03.567 Malloc7 00:25:03.567 Malloc8 00:25:03.567 Malloc9 00:25:03.825 Malloc10 00:25:03.825 01:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.825 01:04:47 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:03.825 01:04:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.825 01:04:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.825 01:04:47 -- target/shutdown.sh@124 -- # perfpid=3474910 00:25:03.825 01:04:47 -- target/shutdown.sh@125 -- # waitforlisten 3474910 /var/tmp/bdevperf.sock 00:25:03.825 01:04:47 -- common/autotest_common.sh@819 -- # '[' -z 3474910 ']' 00:25:03.825 01:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.825 01:04:47 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:03.825 01:04:47 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:03.825 01:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.825 01:04:47 -- nvmf/common.sh@520 -- # config=() 00:25:03.825 01:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
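
The create_subsystems step above (the ten "cat" entries followed by Malloc1..Malloc10 and the listener notice) batches its RPCs into rpcs.txt and replays them against the target. Issued one call at a time with scripts/rpc.py, the same setup would look roughly like the sketch below; the malloc sizes and serial numbers are placeholders rather than the test's exact values.

#!/usr/bin/env bash
# Approximate per-call version of what create_subsystems sets up above:
# a TCP transport, ten malloc bdevs, and ten NVMe-oF subsystems listening
# on 10.0.0.2:4420. Sizes and serials are illustrative only.
RPC=scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
for i in {1..10}; do
    $RPC bdev_malloc_create -b "Malloc$i" 64 512
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
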
00:25:03.825 01:04:47 -- nvmf/common.sh@520 -- # local subsystem config 00:25:03.825 01:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.825 01:04:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.825 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.825 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.825 { 00:25:03.825 "params": { 00:25:03.825 "name": "Nvme$subsystem", 00:25:03.825 "trtype": "$TEST_TRANSPORT", 00:25:03.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.825 "adrfam": "ipv4", 00:25:03.825 "trsvcid": "$NVMF_PORT", 00:25:03.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.825 "hdgst": ${hdgst:-false}, 00:25:03.825 "ddgst": ${ddgst:-false} 00:25:03.825 }, 00:25:03.825 "method": "bdev_nvme_attach_controller" 00:25:03.825 } 00:25:03.825 EOF 00:25:03.825 )") 00:25:03.825 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.825 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.825 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.825 { 00:25:03.825 "params": { 00:25:03.825 "name": "Nvme$subsystem", 00:25:03.825 "trtype": "$TEST_TRANSPORT", 00:25:03.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.825 "adrfam": "ipv4", 00:25:03.825 "trsvcid": "$NVMF_PORT", 00:25:03.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.825 "hdgst": ${hdgst:-false}, 00:25:03.825 "ddgst": ${ddgst:-false} 00:25:03.825 }, 00:25:03.825 "method": "bdev_nvme_attach_controller" 00:25:03.825 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.826 { 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme$subsystem", 00:25:03.826 "trtype": "$TEST_TRANSPORT", 00:25:03.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "$NVMF_PORT", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.826 "hdgst": ${hdgst:-false}, 00:25:03.826 "ddgst": ${ddgst:-false} 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 } 00:25:03.826 EOF 00:25:03.826 )") 00:25:03.826 01:04:47 -- nvmf/common.sh@542 -- # cat 00:25:03.826 01:04:47 -- nvmf/common.sh@544 -- # jq . 00:25:03.826 01:04:47 -- nvmf/common.sh@545 -- # IFS=, 00:25:03.826 01:04:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme1", 00:25:03.826 "trtype": "tcp", 00:25:03.826 "traddr": "10.0.0.2", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "4420", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:03.826 "hdgst": false, 00:25:03.826 "ddgst": false 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 },{ 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme2", 00:25:03.826 "trtype": "tcp", 00:25:03.826 "traddr": "10.0.0.2", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "4420", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:03.826 "hdgst": false, 00:25:03.826 "ddgst": false 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 },{ 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme3", 00:25:03.826 "trtype": "tcp", 00:25:03.826 "traddr": "10.0.0.2", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "4420", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:03.826 "hdgst": false, 00:25:03.826 "ddgst": false 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 },{ 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme4", 00:25:03.826 "trtype": "tcp", 00:25:03.826 "traddr": "10.0.0.2", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "4420", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:03.826 "hdgst": false, 00:25:03.826 "ddgst": false 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.826 },{ 00:25:03.826 "params": { 00:25:03.826 "name": "Nvme5", 00:25:03.826 "trtype": "tcp", 00:25:03.826 "traddr": "10.0.0.2", 00:25:03.826 "adrfam": "ipv4", 00:25:03.826 "trsvcid": "4420", 00:25:03.826 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:03.826 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:03.826 "hdgst": false, 00:25:03.826 "ddgst": false 00:25:03.826 }, 00:25:03.826 "method": "bdev_nvme_attach_controller" 00:25:03.827 },{ 00:25:03.827 "params": { 00:25:03.827 "name": "Nvme6", 00:25:03.827 "trtype": "tcp", 00:25:03.827 "traddr": "10.0.0.2", 00:25:03.827 "adrfam": "ipv4", 00:25:03.827 "trsvcid": "4420", 00:25:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:03.827 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:03.827 "hdgst": false, 00:25:03.827 "ddgst": false 00:25:03.827 }, 00:25:03.827 "method": "bdev_nvme_attach_controller" 00:25:03.827 },{ 00:25:03.827 "params": { 00:25:03.827 "name": "Nvme7", 00:25:03.827 "trtype": 
"tcp", 00:25:03.827 "traddr": "10.0.0.2", 00:25:03.827 "adrfam": "ipv4", 00:25:03.827 "trsvcid": "4420", 00:25:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:03.827 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:03.827 "hdgst": false, 00:25:03.827 "ddgst": false 00:25:03.827 }, 00:25:03.827 "method": "bdev_nvme_attach_controller" 00:25:03.827 },{ 00:25:03.827 "params": { 00:25:03.827 "name": "Nvme8", 00:25:03.827 "trtype": "tcp", 00:25:03.827 "traddr": "10.0.0.2", 00:25:03.827 "adrfam": "ipv4", 00:25:03.827 "trsvcid": "4420", 00:25:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:03.827 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:03.827 "hdgst": false, 00:25:03.827 "ddgst": false 00:25:03.827 }, 00:25:03.827 "method": "bdev_nvme_attach_controller" 00:25:03.827 },{ 00:25:03.827 "params": { 00:25:03.827 "name": "Nvme9", 00:25:03.827 "trtype": "tcp", 00:25:03.827 "traddr": "10.0.0.2", 00:25:03.827 "adrfam": "ipv4", 00:25:03.827 "trsvcid": "4420", 00:25:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:03.827 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:03.827 "hdgst": false, 00:25:03.827 "ddgst": false 00:25:03.827 }, 00:25:03.827 "method": "bdev_nvme_attach_controller" 00:25:03.827 },{ 00:25:03.827 "params": { 00:25:03.827 "name": "Nvme10", 00:25:03.827 "trtype": "tcp", 00:25:03.827 "traddr": "10.0.0.2", 00:25:03.827 "adrfam": "ipv4", 00:25:03.827 "trsvcid": "4420", 00:25:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:03.827 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:03.827 "hdgst": false, 00:25:03.827 "ddgst": false 00:25:03.827 }, 00:25:03.827 "method": "bdev_nvme_attach_controller" 00:25:03.827 }' 00:25:03.827 [2024-07-23 01:04:47.878213] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:03.827 [2024-07-23 01:04:47.878284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474910 ] 00:25:03.827 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.827 [2024-07-23 01:04:47.941004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.085 [2024-07-23 01:04:48.028619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.982 Running I/O for 10 seconds... 
00:25:06.250 01:04:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:06.250 01:04:50 -- common/autotest_common.sh@852 -- # return 0 00:25:06.250 01:04:50 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:06.250 01:04:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.250 01:04:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.250 01:04:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.250 01:04:50 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.250 01:04:50 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:06.250 01:04:50 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:06.250 01:04:50 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:06.250 01:04:50 -- target/shutdown.sh@57 -- # local ret=1 00:25:06.250 01:04:50 -- target/shutdown.sh@58 -- # local i 00:25:06.250 01:04:50 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:06.250 01:04:50 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:06.250 01:04:50 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:06.250 01:04:50 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:06.250 01:04:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.250 01:04:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.250 01:04:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.250 01:04:50 -- target/shutdown.sh@60 -- # read_io_count=214 00:25:06.250 01:04:50 -- target/shutdown.sh@63 -- # '[' 214 -ge 100 ']' 00:25:06.250 01:04:50 -- target/shutdown.sh@64 -- # ret=0 00:25:06.250 01:04:50 -- target/shutdown.sh@65 -- # break 00:25:06.250 01:04:50 -- target/shutdown.sh@69 -- # return 0 00:25:06.250 01:04:50 -- target/shutdown.sh@134 -- # killprocess 3474717 00:25:06.250 01:04:50 -- common/autotest_common.sh@926 -- # '[' -z 3474717 ']' 00:25:06.250 01:04:50 -- common/autotest_common.sh@930 -- # kill -0 3474717 00:25:06.250 01:04:50 -- common/autotest_common.sh@931 -- # uname 00:25:06.250 01:04:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.250 01:04:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3474717 00:25:06.250 01:04:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:06.250 01:04:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:06.250 01:04:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3474717' 00:25:06.250 killing process with pid 3474717 00:25:06.250 01:04:50 -- common/autotest_common.sh@945 -- # kill 3474717 00:25:06.250 01:04:50 -- common/autotest_common.sh@950 -- # wait 3474717 00:25:06.250 [2024-07-23 01:04:50.413180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd190c0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.413264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd190c0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set 00:25:06.250 [2024-07-23 01:04:50.414834] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set
00:25:06.250-00:25:06.251 [2024-07-23 01:04:50.414846 .. 01:04:50.415306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f2e0 is same with the state(5) to be set (identical message repeated for this interval; duplicate entries elided)
00:25:06.251 [2024-07-23 01:04:50.416932 .. 01:04:50.417737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19570 is same with the state(5) to be set (identical message repeated for this interval; duplicate entries elided)
00:25:06.251 [2024-07-23 01:04:50.418554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:06.251 [2024-07-23 01:04:50.418621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.251 [2024-07-23 01:04:50.418653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:06.251 [2024-07-23 01:04:50.418679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.251 [2024-07-23 01:04:50.418697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:06.251 [2024-07-23 01:04:50.418710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.252 [2024-07-23 01:04:50.418724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.418738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.418751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d15530 is same with the state(5) to be set 00:25:06.252 [2024-07-23 01:04:50.418825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.418846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.418861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.418874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.418888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.418901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.418918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.418931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.418944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbfa30 is same with the state(5) to be set 00:25:06.252 [2024-07-23 01:04:50.418997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.419061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.419087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.252 [2024-07-23 01:04:50.419114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebc80 is same with the state(5) to be set 00:25:06.252 [2024-07-23 
01:04:50.419317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.252 [2024-07-23 01:04:50.419638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.252 [2024-07-23 01:04:50.419638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb1c0 is same with the state(5) to be set 00:25:06.252 [2024-07-23 01:04:50.419653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.252-00:25:06.253 [2024-07-23 01:04:50.419670 .. 01:04:50.420521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb1c0 is same with the state(5) to be set (identical message repeated throughout this interval; the entries were interleaved mid-line with the abort notices below and have been separated here, duplicates elided)
00:25:06.252-00:25:06.254 [2024-07-23 01:04:50.419670 .. 01:04:50.420648] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: aborted I/O on sqid:1, nsid:1 (each len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; individual timestamps elided): READ cid:14 lba:35840, READ cid:17 lba:35968, WRITE cid:29 lba:36096, READ cid:20 lba:29824, READ cid:7 lba:29952, READ cid:34 lba:30592, READ cid:21 lba:36224, READ cid:35 lba:36352, READ cid:19 lba:30720, WRITE cid:18 lba:36480, READ cid:38 lba:31488, WRITE cid:36 lba:36608, READ cid:50 lba:31872, READ cid:54 lba:32256, READ cid:24 lba:36736, READ cid:27 lba:32512, READ cid:15 lba:32640, WRITE cid:49 lba:36864, READ cid:31 lba:36992, READ cid:48 lba:32768, READ cid:30 lba:32896, WRITE cid:51 lba:37120, READ cid:57 lba:33024, READ cid:32 lba:33408, WRITE cid:39 lba:37248, READ cid:61 lba:33664, READ cid:53 lba:33920, READ cid:37 lba:34304, WRITE cid:55 lba:37376, READ cid:33 lba:37504, READ cid:1 lba:34560
00:25:06.254 [2024-07-23 01:04:50.420663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.420978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.420993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.254 [2024-07-23 01:04:50.421247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.254 [2024-07-23 01:04:50.421263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39936 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.254 [2024-07-23 01:04:50.421276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.254 [2024-07-23 01:04:50.421292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.254 [2024-07-23 01:04:50.421305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.254 [2024-07-23 01:04:50.421321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.254 [2024-07-23 01:04:50.421335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.254 [2024-07-23 01:04:50.421349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8daa0 is same with the state(5) to be set
00:25:06.254 [2024-07-23 01:04:50.421418] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d8daa0 was disconnected and freed. reset controller.
00:25:06.254-00:25:06.255 [2024-07-23 01:04:50.422459 .. 01:04:50.423311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb650 is same with the state(5) to be set (identical message repeated for this interval; duplicate entries elided)
00:25:06.255 [2024-07-23 01:04:50.423924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:06.255 [2024-07-23 01:04:50.423976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbfa30 (9): Bad file descriptor
00:25:06.255-00:25:06.256 [2024-07-23 01:04:50.425148 .. 01:04:50.425980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set (identical message repeated for this interval; duplicate entries elided)
00:25:06.255 [2024-07-23 01:04:50.425366] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (same error logged again at 01:04:50.425646 and 01:04:50.425717, interleaved with the entries summarized above)
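Note for anyone triaging this section of the log: the repeated tcp.c:1574 (target side) and nvme_tcp.c:322 (host side) set_recv_state errors appear while the TCP qpairs are being torn down for the controller reset logged at 01:04:50.423924, and the ABORTED - SQ DELETION notices are the outstanding admin and I/O commands being completed with an abort status as their submission queues go away. The sketch below is a minimal, hypothetical illustration of the kind of state-machine guard that emits the repeated message; the function, type, and enum names are placeholders and not SPDK's actual definitions.

/* Illustrative sketch only, not the SPDK source: a receive-state setter that
 * warns and returns when asked to re-enter the state the qpair is already in,
 * which is the pattern behind the repeated
 * "The recv state of tqpair=%p is same with the state(%d) to be set" lines. */
#include <stdio.h>

enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR   /* assumed here to be what the log calls "state(5)" */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

static void qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* No-op transition: log and bail out instead of re-entering the state. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* A real transport would also drop any partially received PDU here. */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };

	/* Every teardown path that requests the terminal state again hits the
	 * warning branch, which is why the message floods the log while the
	 * qpairs are being destroyed during the reset. */
	qpair_set_recv_state(&q, RECV_STATE_ERROR);
	return 0;
}

Read this way, the flood is noisy but expected during a forced disconnect: once a qpair has reached its terminal receive state, every further request to enter that state only produces the log line.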
state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.255 [2024-07-23 01:04:50.425564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425646] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.256 [2024-07-23 01:04:50.425660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425717] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.256 [2024-07-23 01:04:50.425725] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.425980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27630 is same with the state(5) to be set 00:25:06.256 [2024-07-23 01:04:50.426590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.426964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.426985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.427000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.427015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.427029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.427044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.256 [2024-07-23 01:04:50.427057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.256 [2024-07-23 01:04:50.427116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27ac0 is same with the state(5) to be set (same message repeated for tqpair=0xc27ac0 through 01:04:50.427978, interleaved with further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs for READ and WRITE commands on sqid:1 with cid values between 14 and 62 and lba values between 31488 and 37632, each completed ABORTED - SQ DELETION (00/08)) 00:25:06.258 [2024-07-23 01:04:50.428002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:06.258 [2024-07-23 01:04:50.428321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.258 [2024-07-23 01:04:50.428566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.428671] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d8f2f0 was disconnected and freed. reset controller. 
00:25:06.258 [2024-07-23 01:04:50.428982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d15530 (9): Bad file descriptor 00:25:06.258 [2024-07-23 01:04:50.429048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9d80 is same with the state(5) to be set 00:25:06.258 [2024-07-23 01:04:50.429211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27f50 is same with the state(5) to be set 00:25:06.258 [2024-07-23 01:04:50.429295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.258 [2024-07-23 01:04:50.429310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27f50 is same with the state(5) to be set 00:25:06.258 [2024-07-23 01:04:50.429324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.258 [2024-07-23 01:04:50.429326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc27f50 is same with the state(5) to be set (same message repeated for tqpair=0xc27f50 through 01:04:50.430123) 00:25:06.258 [2024-07-23 01:04:50.429337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e4d0 is same with the state(5) to be set 00:25:06.258 [2024-07-23 01:04:50.429361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebc80 (9): Bad file descriptor 00:25:06.259 [2024-07-23 01:04:50.429407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.259 [2024-07-23 01:04:50.429427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (matching ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs follow for qid:0 cid:1, cid:2 and cid:3) 00:25:06.259 [2024-07-23 01:04:50.429579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8410 is same with the state(5) to be set 00:25:06.259 [2024-07-23 01:04:50.429642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.259 [2024-07-23 01:04:50.429662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (again followed by ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs for qid:0 cid:1, cid:2 and cid:3) 00:25:06.259 [2024-07-23 01:04:50.429774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87960 is same with the state(5) to be set 00:25:06.259 [2024-07-23 01:04:50.429865] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.260 [2024-07-23 01:04:50.431178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set (same message repeated for tqpair=0xc283e0 through the remainder of this span) 00:25:06.260 [2024-07-23 01:04:50.431231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:06.260 [2024-07-23 01:04:50.431264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e4d0 (9): Bad file descriptor 00:25:06.260 [2024-07-23 01:04:50.431876]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.431993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc283e0 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432296] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.260 [2024-07-23 01:04:50.432586] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.260 [2024-07-23 01:04:50.432853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.432985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.260 [2024-07-23 01:04:50.433100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433207] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433274] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.261 [2024-07-23 01:04:50.433291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433484] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.433692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ee50 is same with the state(5) to be set 00:25:06.261 [2024-07-23 01:04:50.434415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27776 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.261 [2024-07-23 01:04:50.434950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.261 [2024-07-23 01:04:50.434963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.434979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.434992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:06.262 [2024-07-23 01:04:50.435410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 
[2024-07-23 01:04:50.435724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.435995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.436010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 
01:04:50.436024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.436039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.436053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.436068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.436081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.436097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.436110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.436126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.262 [2024-07-23 01:04:50.436139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.262 [2024-07-23 01:04:50.436154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.436167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.436182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.436195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.436210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.436223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.436237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d94a30 is same with the state(5) to be set 00:25:06.263 [2024-07-23 01:04:50.436305] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d94a30 was disconnected and freed. reset controller. 
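The runs above are the same nvmf_tcp_qpair_set_recv_state error repeated for a handful of qpair addresses (0xc27f50, 0xc283e0, 0xd0ee50, 0x1d94a30) while the target tears the connections down. When triaging a console log like this it can help to collapse the repeats into per-qpair counts. A minimal sketch in Python, assuming the console output has been saved to a file; the name build.log below is a placeholder, not something this job produces:

    import re
    from collections import Counter

    # Matches e.g. "nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
    # tqpair=0xc283e0 is same with the state(5) to be set"
    RECV_STATE = re.compile(
        r"nvmf_tcp_qpair_set_recv_state: \*ERROR\*: The recv state of "
        r"tqpair=(0x[0-9a-fA-F]+) is same with the state\((\d+)\)"
    )

    counts = Counter()
    with open("build.log") as log:  # placeholder path
        for line in log:
            # findall copes with many messages packed onto one console line
            for tqpair, state in RECV_STATE.findall(line):
                counts[(tqpair, state)] += 1

    for (tqpair, state), n in counts.most_common():
        print(f"tqpair={tqpair} state={state}: {n} occurrences")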
00:25:06.263 [2024-07-23 01:04:50.436373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:06.263 [2024-07-23 01:04:50.436395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.436413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e4d0 is same with the state(5) to be set 00:25:06.263 [2024-07-23 01:04:50.436454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:06.263 [2024-07-23 01:04:50.436474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.436487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbfa30 is same with the state(5) to be set 00:25:06.263 [2024-07-23 01:04:50.436506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbfa30 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.437648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:06.263 [2024-07-23 01:04:50.437707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e900 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.437733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e4d0 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.437826] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:06.263 [2024-07-23 01:04:50.437848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:06.263 [2024-07-23 01:04:50.437864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:06.263 [2024-07-23 01:04:50.437884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:06.263 [2024-07-23 01:04:50.437909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:06.263 [2024-07-23 01:04:50.437921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.263 [2024-07-23 01:04:50.438231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.263 [2024-07-23 01:04:50.438253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:06.263 [2024-07-23 01:04:50.438431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.263 [2024-07-23 01:04:50.438599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.263 [2024-07-23 01:04:50.438632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6e900 with addr=10.0.0.2, port=4420 00:25:06.263 [2024-07-23 01:04:50.438665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e900 is same with the state(5) to be set 00:25:06.263 [2024-07-23 01:04:50.438738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e900 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.438810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:06.263 [2024-07-23 01:04:50.438828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:06.263 [2024-07-23 01:04:50.438842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:06.263 [2024-07-23 01:04:50.438899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.263 [2024-07-23 01:04:50.438980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e72110 is same with the state(5) to be set 00:25:06.263 [2024-07-23 01:04:50.439157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.263 [2024-07-23 01:04:50.439232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.263 [2024-07-23 01:04:50.439271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87d90 is same with the state(5) to be set 00:25:06.263 [2024-07-23 01:04:50.439312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9d80 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.439350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8410 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.439380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87960 (9): Bad file descriptor 00:25:06.263 [2024-07-23 01:04:50.439510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439733] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.263 [2024-07-23 01:04:50.439762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.263 [2024-07-23 01:04:50.439775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.439978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.439991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37120 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:06.264 [2024-07-23 01:04:50.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.264 [2024-07-23 01:04:50.440955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.265 [2024-07-23 01:04:50.440969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.265 [2024-07-23 01:04:50.440984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.265 [2024-07-23 01:04:50.440998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.265 [2024-07-23 01:04:50.441013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.265 [2024-07-23 01:04:50.441027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.265 [2024-07-23 01:04:50.441043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.265 [2024-07-23 01:04:50.441057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.265 [2024-07-23 01:04:50.441072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.265 [2024-07-23 01:04:50.441086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.265 [2024-07-23 01:04:50.441101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.265 [2024-07-23 01:04:50.441115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.530 [2024-07-23 01:04:50.450680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.530 [2024-07-23 01:04:50.450735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:06.531 [2024-07-23 01:04:50.450839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.450978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.450991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.451007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.451021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.451037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.451051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.451066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55650 is same with the state(5) to be set 00:25:06.531 [2024-07-23 01:04:50.452419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 
01:04:50.452499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.452974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.452988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.531 [2024-07-23 01:04:50.453281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.531 [2024-07-23 01:04:50.453296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.453974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.453987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.532 [2024-07-23 01:04:50.454364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.532 [2024-07-23 01:04:50.454379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96010 is same with the state(5) to be set 00:25:06.532 [2024-07-23 01:04:50.456367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:06.532 [2024-07-23 01:04:50.456404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:06.532 [2024-07-23 01:04:50.456513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e72110 (9): Bad file descriptor 00:25:06.532 [2024-07-23 01:04:50.456553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87d90 (9): Bad file descriptor 00:25:06.532 [2024-07-23 01:04:50.456979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.532 [2024-07-23 01:04:50.457158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.532 [2024-07-23 01:04:50.457184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cebc80 with addr=10.0.0.2, port=4420 00:25:06.532 [2024-07-23 01:04:50.457201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebc80 is same with the state(5) to be set 00:25:06.533 [2024-07-23 01:04:50.457343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.533 [2024-07-23 01:04:50.457514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.533 [2024-07-23 01:04:50.457539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d15530 with addr=10.0.0.2, port=4420 00:25:06.533 [2024-07-23 01:04:50.457555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d15530 is same with the state(5) to be set 00:25:06.533 [2024-07-23 01:04:50.457892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.457917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.457939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.457954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.457970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.457984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.533 [2024-07-23 01:04:50.458916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.458979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.458992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.459008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.459021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.533 [2024-07-23 01:04:50.459036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.533 [2024-07-23 01:04:50.459050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 
[2024-07-23 01:04:50.459211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 
01:04:50.459505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.459825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.459839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56b00 is same with the state(5) to be set 00:25:06.534 [2024-07-23 01:04:50.461074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.534 [2024-07-23 01:04:50.461447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.534 [2024-07-23 01:04:50.461461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.461977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.461992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.535 [2024-07-23 01:04:50.462399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.535 [2024-07-23 01:04:50.462417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.462984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.462998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.463014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.463028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.463043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e576b0 is same with the state(5) to be set 00:25:06.536 [2024-07-23 01:04:50.464256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.536 [2024-07-23 01:04:50.464732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-07-23 01:04:50.464745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.464981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.464996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33664 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36480 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-07-23 01:04:50.465870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.537 [2024-07-23 01:04:50.465886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:06.537 [2024-07-23 01:04:50.465912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.465927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.465941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.465957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.465970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.465985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.465999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.466027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.466055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.466084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.466113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.466142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.466171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.466185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d908d0 is same with the state(5) to be set 00:25:06.538 [2024-07-23 
01:04:50.467667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:06.538 [2024-07-23 01:04:50.467699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.538 [2024-07-23 01:04:50.467718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:06.538 [2024-07-23 01:04:50.467735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:06.538 [2024-07-23 01:04:50.467753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:06.538 [2024-07-23 01:04:50.467826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebc80 (9): Bad file descriptor 00:25:06.538 [2024-07-23 01:04:50.467855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d15530 (9): Bad file descriptor 00:25:06.538 [2024-07-23 01:04:50.467940] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.538 [2024-07-23 01:04:50.467971] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.538 [2024-07-23 01:04:50.467991] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.538 [2024-07-23 01:04:50.468085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:06.538 [2024-07-23 01:04:50.468400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.468554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.468580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6e4d0 with addr=10.0.0.2, port=4420 00:25:06.538 [2024-07-23 01:04:50.468607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e4d0 is same with the state(5) to be set 00:25:06.538 [2024-07-23 01:04:50.468753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.468914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.468938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbfa30 with addr=10.0.0.2, port=4420 00:25:06.538 [2024-07-23 01:04:50.468953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbfa30 is same with the state(5) to be set 00:25:06.538 [2024-07-23 01:04:50.469112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.469266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.469289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6e900 with addr=10.0.0.2, port=4420 00:25:06.538 [2024-07-23 01:04:50.469304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e900 is same with the state(5) to be set 00:25:06.538 [2024-07-23 01:04:50.469424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.469571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 
01:04:50.469594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9d80 with addr=10.0.0.2, port=4420 00:25:06.538 [2024-07-23 01:04:50.469609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9d80 is same with the state(5) to be set 00:25:06.538 [2024-07-23 01:04:50.469760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.469892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.538 [2024-07-23 01:04:50.469916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb8410 with addr=10.0.0.2, port=4420 00:25:06.538 [2024-07-23 01:04:50.469931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8410 is same with the state(5) to be set 00:25:06.538 [2024-07-23 01:04:50.469946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:06.538 [2024-07-23 01:04:50.469959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:06.538 [2024-07-23 01:04:50.469980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:06.538 [2024-07-23 01:04:50.470000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:06.538 [2024-07-23 01:04:50.470014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:06.538 [2024-07-23 01:04:50.470026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:06.538 [2024-07-23 01:04:50.470911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.470939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.470966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.470981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.470998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 
[2024-07-23 01:04:50.471085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.538 [2024-07-23 01:04:50.471340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.538 [2024-07-23 01:04:50.471356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.471971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.471985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.539 [2024-07-23 01:04:50.472527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.539 [2024-07-23 01:04:50.472543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.472866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.472880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d91eb0 is same with the state(5) to be set 00:25:06.540 [2024-07-23 01:04:50.474108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.540 [2024-07-23 01:04:50.474773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.540 [2024-07-23 01:04:50.474788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.474817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.474850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.474880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.474916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.474944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.474972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.474986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.541 [2024-07-23 01:04:50.475942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.541 [2024-07-23 01:04:50.475961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.542 [2024-07-23 01:04:50.475975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.542 [2024-07-23 01:04:50.475989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.542 [2024-07-23 01:04:50.476003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.542 [2024-07-23 01:04:50.476018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.542 [2024-07-23 01:04:50.476031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.542 [2024-07-23 01:04:50.476045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93490 is same with the state(5) to be set 00:25:06.542 [2024-07-23 01:04:50.478265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.542 [2024-07-23 01:04:50.478291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.542 [2024-07-23 01:04:50.478307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:06.542 task offset: 34432 on job bdev=Nvme1n1 fails 00:25:06.542 00:25:06.542 Latency(us) 00:25:06.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.542 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme1n1 ended in about 0.73 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme1n1 : 0.73 343.15 21.45 87.50 0.00 147603.14 4878.79 162335.10 00:25:06.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme2n1 ended in about 0.76 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme2n1 : 0.76 330.13 20.63 84.18 0.00 151940.57 88546.42 120392.06 00:25:06.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme3n1 ended in about 0.77 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme3n1 : 0.77 326.38 20.40 83.22 0.00 152202.56 89323.14 120392.06 00:25:06.542 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme4n1 ended in about 0.77 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme4n1 : 0.77 325.04 20.31 82.88 0.00 151292.69 90099.86 118061.89 00:25:06.542 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme5n1 ended in about 0.74 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme5n1 : 0.74 339.59 21.22 86.59 0.00 143111.05 24660.95 135926.52 00:25:06.542 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme6n1 ended in about 0.78 seconds with error 
00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme6n1 : 0.78 323.72 20.23 82.54 0.00 148964.37 81167.55 131266.18 00:25:06.542 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme7n1 ended in about 0.78 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme7n1 : 0.78 320.95 20.06 81.84 0.00 148840.46 87769.69 118838.61 00:25:06.542 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme8n1 ended in about 0.79 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme8n1 : 0.79 319.67 19.98 81.51 0.00 148011.96 79225.74 117285.17 00:25:06.542 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme9n1 ended in about 0.75 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme9n1 : 0.75 284.27 17.77 80.45 0.00 160409.60 2585.03 128159.29 00:25:06.542 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:06.542 Job: Nvme10n1 ended in about 0.76 seconds with error 00:25:06.542 Verification LBA range: start 0x0 length 0x400 00:25:06.542 Nvme10n1 : 0.76 272.40 17.02 83.82 0.00 162971.90 99420.54 147577.36 00:25:06.542 =================================================================================================================== 00:25:06.542 Total : 3185.30 199.08 834.52 0.00 151249.77 2585.03 162335.10 00:25:06.542 [2024-07-23 01:04:50.505996] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:06.542 [2024-07-23 01:04:50.506073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:06.542 [2024-07-23 01:04:50.506441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.542 [2024-07-23 01:04:50.506599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.542 [2024-07-23 01:04:50.506652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87960 with addr=10.0.0.2, port=4420 00:25:06.542 [2024-07-23 01:04:50.506673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87960 is same with the state(5) to be set 00:25:06.542 [2024-07-23 01:04:50.506701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e4d0 (9): Bad file descriptor 00:25:06.542 [2024-07-23 01:04:50.506724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbfa30 (9): Bad file descriptor 00:25:06.542 [2024-07-23 01:04:50.506742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e900 (9): Bad file descriptor 00:25:06.542 [2024-07-23 01:04:50.506760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9d80 (9): Bad file descriptor 00:25:06.542 [2024-07-23 01:04:50.506778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8410 (9): Bad file descriptor 00:25:06.542 [2024-07-23 01:04:50.506845] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.506870] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
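The bdevperf per-job summary a few lines above reports both IOPS and MiB/s for 64 KiB (65536-byte) IOs, so the two columns should differ by a factor of 16 (65536 / 1048576). A minimal sanity check of that relationship, using a few IOPS values taken from the table (illustrative only, not part of the test run):

    # 64 KiB per IO  =>  MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
    for iops in 343.15 330.13 320.95; do
      awk -v i="$iops" 'BEGIN { printf "%8.2f IOPS -> %6.2f MiB/s\n", i, i / 16 }'
    done
    # prints ~21.45, ~20.63 and ~20.06 MiB/s, matching the Nvme1n1, Nvme2n1 and Nvme7n1 rows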
00:25:06.542 [2024-07-23 01:04:50.506889] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.506909] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.506929] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.506949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87960 (9): Bad file descriptor 00:25:06.542 [2024-07-23 01:04:50.507254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.542 [2024-07-23 01:04:50.507395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.542 [2024-07-23 01:04:50.507421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e72110 with addr=10.0.0.2, port=4420 00:25:06.542 [2024-07-23 01:04:50.507438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e72110 is same with the state(5) to be set 00:25:06.542 [2024-07-23 01:04:50.507578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.542 [2024-07-23 01:04:50.507727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.542 [2024-07-23 01:04:50.507753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87d90 with addr=10.0.0.2, port=4420 00:25:06.542 [2024-07-23 01:04:50.507769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87d90 is same with the state(5) to be set 00:25:06.542 [2024-07-23 01:04:50.507796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:06.542 [2024-07-23 01:04:50.507810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:06.542 [2024-07-23 01:04:50.507825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:06.542 [2024-07-23 01:04:50.507846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:06.542 [2024-07-23 01:04:50.507860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:06.542 [2024-07-23 01:04:50.507873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.542 [2024-07-23 01:04:50.507890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:06.542 [2024-07-23 01:04:50.507904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:06.542 [2024-07-23 01:04:50.507916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:06.542 [2024-07-23 01:04:50.507933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:06.542 [2024-07-23 01:04:50.507947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:06.542 [2024-07-23 01:04:50.507959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:06.542 [2024-07-23 01:04:50.507976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:06.542 [2024-07-23 01:04:50.507989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:06.542 [2024-07-23 01:04:50.508001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:06.542 [2024-07-23 01:04:50.508036] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508058] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508077] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508095] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508112] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508130] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508147] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508164] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.542 [2024-07-23 01:04:50.508747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:06.542 [2024-07-23 01:04:50.508775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:06.542 [2024-07-23 01:04:50.508812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.542 [2024-07-23 01:04:50.508829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.542 [2024-07-23 01:04:50.508841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.543 [2024-07-23 01:04:50.508852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.543 [2024-07-23 01:04:50.508863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.543 [2024-07-23 01:04:50.508899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e72110 (9): Bad file descriptor 00:25:06.543 [2024-07-23 01:04:50.508921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87d90 (9): Bad file descriptor 00:25:06.543 [2024-07-23 01:04:50.508937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:06.543 [2024-07-23 01:04:50.508949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:06.543 [2024-07-23 01:04:50.508962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:06.543 [2024-07-23 01:04:50.509023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:06.543 [2024-07-23 01:04:50.509178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.543 [2024-07-23 01:04:50.509333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.543 [2024-07-23 01:04:50.509359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d15530 with addr=10.0.0.2, port=4420 00:25:06.543 [2024-07-23 01:04:50.509375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d15530 is same with the state(5) to be set 00:25:06.543 [2024-07-23 01:04:50.509498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.543 [2024-07-23 01:04:50.509631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.543 [2024-07-23 01:04:50.509658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cebc80 with addr=10.0.0.2, port=4420 00:25:06.543 [2024-07-23 01:04:50.509674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebc80 is same with the state(5) to be set 00:25:06.543 [2024-07-23 01:04:50.509688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:06.543 [2024-07-23 01:04:50.509701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:06.543 [2024-07-23 01:04:50.509713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:06.543 [2024-07-23 01:04:50.509731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:06.543 [2024-07-23 01:04:50.509745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:06.543 [2024-07-23 01:04:50.509757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:06.543 [2024-07-23 01:04:50.509811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.543 [2024-07-23 01:04:50.509830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.543 [2024-07-23 01:04:50.509847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d15530 (9): Bad file descriptor 00:25:06.543 [2024-07-23 01:04:50.509866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebc80 (9): Bad file descriptor 00:25:06.543 [2024-07-23 01:04:50.509906] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:06.543 [2024-07-23 01:04:50.509923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:06.543 [2024-07-23 01:04:50.509937] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:06.543 [2024-07-23 01:04:50.509953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:06.543 [2024-07-23 01:04:50.509967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:06.543 [2024-07-23 01:04:50.509980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
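The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED, i.e. nothing was accepting connections on 10.0.0.2:4420 at that moment, which is consistent with the target being torn down by this shutdown test. A quick, illustrative way to confirm the errno-to-name mapping on the build host (not part of the test itself):

    # errno 111 on Linux is ECONNREFUSED ("Connection refused")
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'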
00:25:06.543 [2024-07-23 01:04:50.510017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.543 [2024-07-23 01:04:50.510039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.803 01:04:50 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:06.803 01:04:50 -- target/shutdown.sh@138 -- # sleep 1 00:25:07.739 01:04:51 -- target/shutdown.sh@141 -- # kill -9 3474910 00:25:07.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3474910) - No such process 00:25:07.739 01:04:51 -- target/shutdown.sh@141 -- # true 00:25:07.739 01:04:51 -- target/shutdown.sh@143 -- # stoptarget 00:25:07.739 01:04:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:07.739 01:04:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:07.739 01:04:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:07.997 01:04:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:07.997 01:04:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.997 01:04:51 -- nvmf/common.sh@116 -- # sync 00:25:07.997 01:04:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:07.997 01:04:51 -- nvmf/common.sh@119 -- # set +e 00:25:07.997 01:04:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.997 01:04:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:07.997 rmmod nvme_tcp 00:25:07.997 rmmod nvme_fabrics 00:25:07.997 rmmod nvme_keyring 00:25:07.997 01:04:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.997 01:04:51 -- nvmf/common.sh@123 -- # set -e 00:25:07.997 01:04:51 -- nvmf/common.sh@124 -- # return 0 00:25:07.997 01:04:51 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:07.997 01:04:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.997 01:04:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.997 01:04:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.997 01:04:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.997 01:04:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.997 01:04:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.997 01:04:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.998 01:04:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.900 01:04:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:09.901 00:25:09.901 real 0m7.949s 00:25:09.901 user 0m20.254s 00:25:09.901 sys 0m1.609s 00:25:09.901 01:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.901 01:04:54 -- common/autotest_common.sh@10 -- # set +x 00:25:09.901 ************************************ 00:25:09.901 END TEST nvmf_shutdown_tc3 00:25:09.901 ************************************ 00:25:09.901 01:04:54 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:09.901 00:25:09.901 real 0m28.248s 00:25:09.901 user 1m19.951s 00:25:09.901 sys 0m6.548s 00:25:09.901 01:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.901 01:04:54 -- common/autotest_common.sh@10 -- # set +x 00:25:09.901 ************************************ 00:25:09.901 END TEST nvmf_shutdown 00:25:09.901 ************************************ 00:25:09.901 01:04:54 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:09.901 01:04:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:09.901 01:04:54 -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.901 01:04:54 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:09.901 01:04:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:09.901 01:04:54 -- common/autotest_common.sh@10 -- # set +x 00:25:09.901 01:04:54 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:09.901 01:04:54 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:09.901 01:04:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:09.901 01:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:09.901 01:04:54 -- common/autotest_common.sh@10 -- # set +x 00:25:09.901 ************************************ 00:25:09.901 START TEST nvmf_multicontroller 00:25:09.901 ************************************ 00:25:09.901 01:04:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:10.159 * Looking for test storage... 00:25:10.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.159 01:04:54 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.159 01:04:54 -- nvmf/common.sh@7 -- # uname -s 00:25:10.159 01:04:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.159 01:04:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.159 01:04:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.159 01:04:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.159 01:04:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.159 01:04:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.159 01:04:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.159 01:04:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.159 01:04:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.159 01:04:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.159 01:04:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.159 01:04:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.159 01:04:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.159 01:04:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.159 01:04:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.159 01:04:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.159 01:04:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.159 01:04:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.159 01:04:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.159 01:04:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.159 01:04:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.159 01:04:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.159 01:04:54 -- paths/export.sh@5 -- # export PATH 00:25:10.159 01:04:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.159 01:04:54 -- nvmf/common.sh@46 -- # : 0 00:25:10.159 01:04:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:10.159 01:04:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:10.159 01:04:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:10.159 01:04:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.159 01:04:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.159 01:04:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:10.159 01:04:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:10.159 01:04:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:10.159 01:04:54 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.159 01:04:54 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.159 01:04:54 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:10.159 01:04:54 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:10.159 01:04:54 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.159 01:04:54 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:10.159 01:04:54 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:10.160 01:04:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:10.160 01:04:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.160 01:04:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:10.160 01:04:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:10.160 01:04:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:10.160 01:04:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.160 01:04:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.160 01:04:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
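multicontroller.sh sets MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and points bdevperf_rpc_sock at /var/tmp/bdevperf.sock above. As a rough sketch only (the test drives this through its own helpers, not this literal command), a malloc bdev of that size could be created through SPDK's rpc.py once an application is listening on that socket; bdev_malloc_create takes the size in megabytes followed by the block size:

    # from the SPDK repository root: create a 64 MB malloc bdev with 512-byte blocks
    # over the bdevperf RPC socket configured above
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_malloc_create -b Malloc0 64 512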
00:25:10.160 01:04:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:10.160 01:04:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:10.160 01:04:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:10.160 01:04:54 -- common/autotest_common.sh@10 -- # set +x 00:25:12.064 01:04:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:12.064 01:04:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:12.064 01:04:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:12.064 01:04:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:12.064 01:04:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:12.064 01:04:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:12.064 01:04:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:12.064 01:04:56 -- nvmf/common.sh@294 -- # net_devs=() 00:25:12.064 01:04:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:12.064 01:04:56 -- nvmf/common.sh@295 -- # e810=() 00:25:12.064 01:04:56 -- nvmf/common.sh@295 -- # local -ga e810 00:25:12.064 01:04:56 -- nvmf/common.sh@296 -- # x722=() 00:25:12.064 01:04:56 -- nvmf/common.sh@296 -- # local -ga x722 00:25:12.064 01:04:56 -- nvmf/common.sh@297 -- # mlx=() 00:25:12.064 01:04:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:12.064 01:04:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.064 01:04:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:12.064 01:04:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:12.064 01:04:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:12.064 01:04:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:12.064 01:04:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:12.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:12.064 01:04:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:12.064 01:04:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:12.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:12.064 01:04:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:25:12.064 01:04:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:12.064 01:04:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:12.064 01:04:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.064 01:04:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:12.064 01:04:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.064 01:04:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:12.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:12.064 01:04:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.064 01:04:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:12.064 01:04:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.064 01:04:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:12.064 01:04:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.064 01:04:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:12.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:12.064 01:04:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.064 01:04:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:12.064 01:04:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:12.064 01:04:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:12.064 01:04:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:12.064 01:04:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.064 01:04:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.064 01:04:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.064 01:04:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:12.064 01:04:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.064 01:04:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.064 01:04:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:12.064 01:04:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.064 01:04:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.064 01:04:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:12.065 01:04:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:12.065 01:04:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.065 01:04:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.065 01:04:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.065 01:04:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.065 01:04:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:12.065 01:04:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.065 01:04:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.065 01:04:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:25:12.065 01:04:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:12.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:25:12.065 00:25:12.065 --- 10.0.0.2 ping statistics --- 00:25:12.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.065 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:12.065 01:04:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:12.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:25:12.065 00:25:12.065 --- 10.0.0.1 ping statistics --- 00:25:12.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.065 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:12.065 01:04:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.065 01:04:56 -- nvmf/common.sh@410 -- # return 0 00:25:12.065 01:04:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:12.065 01:04:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.065 01:04:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:12.065 01:04:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:12.065 01:04:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.065 01:04:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:12.065 01:04:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:12.065 01:04:56 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:12.065 01:04:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:12.065 01:04:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:12.065 01:04:56 -- common/autotest_common.sh@10 -- # set +x 00:25:12.065 01:04:56 -- nvmf/common.sh@469 -- # nvmfpid=3477454 00:25:12.065 01:04:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:12.065 01:04:56 -- nvmf/common.sh@470 -- # waitforlisten 3477454 00:25:12.065 01:04:56 -- common/autotest_common.sh@819 -- # '[' -z 3477454 ']' 00:25:12.065 01:04:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.065 01:04:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:12.065 01:04:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.065 01:04:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:12.065 01:04:56 -- common/autotest_common.sh@10 -- # set +x 00:25:12.065 [2024-07-23 01:04:56.231365] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:12.065 [2024-07-23 01:04:56.231460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.065 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.324 [2024-07-23 01:04:56.300733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:12.324 [2024-07-23 01:04:56.389118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:12.325 [2024-07-23 01:04:56.389295] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:12.325 [2024-07-23 01:04:56.389315] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.325 [2024-07-23 01:04:56.389330] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.325 [2024-07-23 01:04:56.389425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.325 [2024-07-23 01:04:56.389540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.325 [2024-07-23 01:04:56.389543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.261 01:04:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:13.261 01:04:57 -- common/autotest_common.sh@852 -- # return 0 00:25:13.261 01:04:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:13.261 01:04:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.261 01:04:57 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 [2024-07-23 01:04:57.216753] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 Malloc0 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 [2024-07-23 01:04:57.271348] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 [2024-07-23 01:04:57.279255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
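For orientation, the target-side setup traced above reduces to the following RPC sequence. This is a minimal sketch that assumes an nvmf_tgt is already running and that the SPDK source tree is the working directory; the test itself issues the same calls through rpc_cmd, inside the cvl_0_0_ns_spdk network namespace.

    # transport + first subsystem, the same calls traced by multicontroller.sh
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (Malloc1 exported as nqn.2016-06.io.spdk:cnode2) is created the same way in the trace that follows.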
00:25:13.261 01:04:57 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 Malloc1 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:13.261 01:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 01:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.261 01:04:57 -- host/multicontroller.sh@44 -- # bdevperf_pid=3477613 00:25:13.261 01:04:57 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:13.261 01:04:57 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.261 01:04:57 -- host/multicontroller.sh@47 -- # waitforlisten 3477613 /var/tmp/bdevperf.sock 00:25:13.261 01:04:57 -- common/autotest_common.sh@819 -- # '[' -z 3477613 ']' 00:25:13.261 01:04:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.261 01:04:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:13.261 01:04:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
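The bdevperf process launched here starts idle (-z) and only exposes its own RPC socket; the initiator side of the test is driven entirely over /var/tmp/bdevperf.sock. A rough standalone equivalent of that flow, assuming the binaries and addresses used in this workspace, would be:

    # start bdevperf in "wait for RPC" mode on its own socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # attach the target's first subsystem as bdev NVMe0 through that socket
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # kick off the configured write workload once the bdev exists
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach/detach permutations traced next all go through the same -s /var/tmp/bdevperf.sock socket.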
00:25:13.261 01:04:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:13.261 01:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:14.225 01:04:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:14.225 01:04:58 -- common/autotest_common.sh@852 -- # return 0 00:25:14.225 01:04:58 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:14.225 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.225 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.485 NVMe0n1 00:25:14.485 01:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.485 01:04:58 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.485 01:04:58 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:14.485 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.485 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.485 01:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.485 1 00:25:14.485 01:04:58 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:14.485 01:04:58 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.485 01:04:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:14.485 01:04:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.485 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.485 01:04:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.485 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.485 01:04:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:14.485 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.485 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.485 request: 00:25:14.485 { 00:25:14.485 "name": "NVMe0", 00:25:14.485 "trtype": "tcp", 00:25:14.485 "traddr": "10.0.0.2", 00:25:14.485 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:14.485 "hostaddr": "10.0.0.2", 00:25:14.485 "hostsvcid": "60000", 00:25:14.485 "adrfam": "ipv4", 00:25:14.486 "trsvcid": "4420", 00:25:14.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.486 "method": "bdev_nvme_attach_controller", 00:25:14.486 "req_id": 1 00:25:14.486 } 00:25:14.486 Got JSON-RPC error response 00:25:14.486 response: 00:25:14.486 { 00:25:14.486 "code": -114, 00:25:14.486 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:14.486 } 00:25:14.486 01:04:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # es=1 00:25:14.486 01:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.486 01:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.486 01:04:58 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:14.486 01:04:58 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.486 01:04:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:14.486 01:04:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:14.486 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.486 request: 00:25:14.486 { 00:25:14.486 "name": "NVMe0", 00:25:14.486 "trtype": "tcp", 00:25:14.486 "traddr": "10.0.0.2", 00:25:14.486 "hostaddr": "10.0.0.2", 00:25:14.486 "hostsvcid": "60000", 00:25:14.486 "adrfam": "ipv4", 00:25:14.486 "trsvcid": "4420", 00:25:14.486 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:14.486 "method": "bdev_nvme_attach_controller", 00:25:14.486 "req_id": 1 00:25:14.486 } 00:25:14.486 Got JSON-RPC error response 00:25:14.486 response: 00:25:14.486 { 00:25:14.486 "code": -114, 00:25:14.486 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:14.486 } 00:25:14.486 01:04:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # es=1 00:25:14.486 01:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.486 01:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.486 01:04:58 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.486 01:04:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.486 request: 00:25:14.486 { 00:25:14.486 "name": "NVMe0", 00:25:14.486 "trtype": "tcp", 00:25:14.486 "traddr": "10.0.0.2", 00:25:14.486 "hostaddr": 
"10.0.0.2", 00:25:14.486 "hostsvcid": "60000", 00:25:14.486 "adrfam": "ipv4", 00:25:14.486 "trsvcid": "4420", 00:25:14.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.486 "multipath": "disable", 00:25:14.486 "method": "bdev_nvme_attach_controller", 00:25:14.486 "req_id": 1 00:25:14.486 } 00:25:14.486 Got JSON-RPC error response 00:25:14.486 response: 00:25:14.486 { 00:25:14.486 "code": -114, 00:25:14.486 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:14.486 } 00:25:14.486 01:04:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # es=1 00:25:14.486 01:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.486 01:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.486 01:04:58 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:14.486 01:04:58 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.486 01:04:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:14.486 01:04:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.486 01:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:14.486 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.486 request: 00:25:14.486 { 00:25:14.486 "name": "NVMe0", 00:25:14.486 "trtype": "tcp", 00:25:14.486 "traddr": "10.0.0.2", 00:25:14.486 "hostaddr": "10.0.0.2", 00:25:14.486 "hostsvcid": "60000", 00:25:14.486 "adrfam": "ipv4", 00:25:14.486 "trsvcid": "4420", 00:25:14.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.486 "multipath": "failover", 00:25:14.486 "method": "bdev_nvme_attach_controller", 00:25:14.486 "req_id": 1 00:25:14.486 } 00:25:14.486 Got JSON-RPC error response 00:25:14.486 response: 00:25:14.486 { 00:25:14.486 "code": -114, 00:25:14.486 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:14.486 } 00:25:14.486 01:04:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@643 -- # es=1 00:25:14.486 01:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.486 01:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.486 01:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.486 01:04:58 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.486 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.486 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 00:25:14.745 01:04:58 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:14.745 01:04:58 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.745 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.745 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 01:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.745 01:04:58 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:14.745 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.745 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 00:25:14.745 01:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.745 01:04:58 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.745 01:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.745 01:04:58 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:14.745 01:04:58 -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 01:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.745 01:04:58 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:14.745 01:04:58 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.120 0 00:25:16.120 01:05:00 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:16.120 01:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.120 01:05:00 -- common/autotest_common.sh@10 -- # set +x 00:25:16.120 01:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.120 01:05:00 -- host/multicontroller.sh@100 -- # killprocess 3477613 00:25:16.120 01:05:00 -- common/autotest_common.sh@926 -- # '[' -z 3477613 ']' 00:25:16.120 01:05:00 -- common/autotest_common.sh@930 -- # kill -0 3477613 00:25:16.120 01:05:00 -- common/autotest_common.sh@931 -- # uname 00:25:16.120 01:05:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.120 01:05:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3477613 00:25:16.120 01:05:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:16.120 01:05:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:16.120 01:05:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3477613' 00:25:16.120 killing process with pid 3477613 00:25:16.120 01:05:00 -- common/autotest_common.sh@945 -- # kill 3477613 00:25:16.120 01:05:00 -- common/autotest_common.sh@950 -- # wait 3477613 00:25:16.120 01:05:00 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.120 01:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.120 01:05:00 -- common/autotest_common.sh@10 -- # set +x 00:25:16.120 01:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.120 01:05:00 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:16.120 01:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.120 01:05:00 -- common/autotest_common.sh@10 -- # set +x 00:25:16.120 01:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.120 01:05:00 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
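Condensed from the traced requests and their JSON-RPC responses above, the multicontroller checks amount to the following outcomes for bdev_nvme_attach_controller against the already-attached NVMe0 (10.0.0.2:4420, cnode1); the spellings are the ones used by the test and the results are as observed in this run:

    # same bdev name, extra -q (different hostnqn)        -> error -114, controller already exists
    # same bdev name, different subsystem (cnode2)        -> error -114
    # same path, -x disable                               -> error -114, multipath is disabled
    # same path, -x failover                              -> error -114
    # same name and subsystem, second listener port 4421  -> succeeds (adds a second path to NVMe0)
    # detach the 4421 path, attach it again as NVMe1      -> succeeds; bdev_nvme_get_controllers | grep -c NVMe == 2

With both controllers in place, perform_tests drives the 1-second write workload whose results appear in the try.txt dump below.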
00:25:16.120 01:05:00 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.120 01:05:00 -- common/autotest_common.sh@1597 -- # read -r file 00:25:16.120 01:05:00 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:16.120 01:05:00 -- common/autotest_common.sh@1596 -- # sort -u 00:25:16.378 01:05:00 -- common/autotest_common.sh@1598 -- # cat 00:25:16.378 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:16.378 [2024-07-23 01:04:57.374034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:16.378 [2024-07-23 01:04:57.374132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477613 ] 00:25:16.378 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.378 [2024-07-23 01:04:57.434127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.378 [2024-07-23 01:04:57.518721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.378 [2024-07-23 01:04:58.892946] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name fc8b780c-d587-400a-b720-9e1b9beeb654 already exists 00:25:16.378 [2024-07-23 01:04:58.892987] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:fc8b780c-d587-400a-b720-9e1b9beeb654 alias for bdev NVMe1n1 00:25:16.378 [2024-07-23 01:04:58.893004] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:16.378 Running I/O for 1 seconds... 00:25:16.378 00:25:16.378 Latency(us) 00:25:16.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.378 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:16.378 NVMe0n1 : 1.01 16258.30 63.51 0.00 0.00 7836.06 1990.35 10048.85 00:25:16.378 =================================================================================================================== 00:25:16.378 Total : 16258.30 63.51 0.00 0.00 7836.06 1990.35 10048.85 00:25:16.378 Received shutdown signal, test time was about 1.000000 seconds 00:25:16.378 00:25:16.378 Latency(us) 00:25:16.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.378 =================================================================================================================== 00:25:16.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.378 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:16.378 01:05:00 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.378 01:05:00 -- common/autotest_common.sh@1597 -- # read -r file 00:25:16.378 01:05:00 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:16.378 01:05:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:16.378 01:05:00 -- nvmf/common.sh@116 -- # sync 00:25:16.378 01:05:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:16.378 01:05:00 -- nvmf/common.sh@119 -- # set +e 00:25:16.378 01:05:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:16.378 01:05:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:16.378 rmmod nvme_tcp 00:25:16.378 rmmod nvme_fabrics 00:25:16.378 rmmod nvme_keyring 00:25:16.378 01:05:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.378 01:05:00 -- nvmf/common.sh@123 -- # set 
-e 00:25:16.378 01:05:00 -- nvmf/common.sh@124 -- # return 0 00:25:16.378 01:05:00 -- nvmf/common.sh@477 -- # '[' -n 3477454 ']' 00:25:16.378 01:05:00 -- nvmf/common.sh@478 -- # killprocess 3477454 00:25:16.378 01:05:00 -- common/autotest_common.sh@926 -- # '[' -z 3477454 ']' 00:25:16.378 01:05:00 -- common/autotest_common.sh@930 -- # kill -0 3477454 00:25:16.378 01:05:00 -- common/autotest_common.sh@931 -- # uname 00:25:16.378 01:05:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.378 01:05:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3477454 00:25:16.378 01:05:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:16.378 01:05:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:16.378 01:05:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3477454' 00:25:16.378 killing process with pid 3477454 00:25:16.378 01:05:00 -- common/autotest_common.sh@945 -- # kill 3477454 00:25:16.378 01:05:00 -- common/autotest_common.sh@950 -- # wait 3477454 00:25:16.636 01:05:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:16.636 01:05:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:16.636 01:05:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:16.636 01:05:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.636 01:05:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:16.636 01:05:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.636 01:05:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.636 01:05:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.538 01:05:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:18.538 00:25:18.538 real 0m8.631s 00:25:18.538 user 0m16.776s 00:25:18.538 sys 0m2.256s 00:25:18.538 01:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.538 01:05:02 -- common/autotest_common.sh@10 -- # set +x 00:25:18.538 ************************************ 00:25:18.538 END TEST nvmf_multicontroller 00:25:18.538 ************************************ 00:25:18.796 01:05:02 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:18.796 01:05:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:18.796 01:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:18.796 01:05:02 -- common/autotest_common.sh@10 -- # set +x 00:25:18.796 ************************************ 00:25:18.796 START TEST nvmf_aer 00:25:18.796 ************************************ 00:25:18.796 01:05:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:18.796 * Looking for test storage... 
00:25:18.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.796 01:05:02 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.796 01:05:02 -- nvmf/common.sh@7 -- # uname -s 00:25:18.796 01:05:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.796 01:05:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.796 01:05:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.796 01:05:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.796 01:05:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.796 01:05:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.796 01:05:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.796 01:05:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.796 01:05:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.796 01:05:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.796 01:05:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.796 01:05:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.796 01:05:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.796 01:05:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.796 01:05:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.796 01:05:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.796 01:05:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.796 01:05:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.796 01:05:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.796 01:05:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.796 01:05:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.796 01:05:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.796 01:05:02 -- paths/export.sh@5 -- # export PATH 00:25:18.796 01:05:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.796 01:05:02 -- nvmf/common.sh@46 -- # : 0 00:25:18.796 01:05:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:18.796 01:05:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:18.796 01:05:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:18.796 01:05:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.796 01:05:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.796 01:05:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:18.796 01:05:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:18.796 01:05:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:18.796 01:05:02 -- host/aer.sh@11 -- # nvmftestinit 00:25:18.796 01:05:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:18.796 01:05:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.796 01:05:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:18.796 01:05:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:18.796 01:05:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:18.796 01:05:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.796 01:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.796 01:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.796 01:05:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:18.796 01:05:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:18.796 01:05:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:18.796 01:05:02 -- common/autotest_common.sh@10 -- # set +x 00:25:20.701 01:05:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:20.701 01:05:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:20.701 01:05:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:20.701 01:05:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:20.701 01:05:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:20.701 01:05:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:20.701 01:05:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:20.701 01:05:04 -- nvmf/common.sh@294 -- # net_devs=() 00:25:20.701 01:05:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:20.701 01:05:04 -- nvmf/common.sh@295 -- # e810=() 00:25:20.701 01:05:04 -- nvmf/common.sh@295 -- # local -ga e810 00:25:20.701 01:05:04 -- nvmf/common.sh@296 -- # x722=() 00:25:20.701 
01:05:04 -- nvmf/common.sh@296 -- # local -ga x722 00:25:20.701 01:05:04 -- nvmf/common.sh@297 -- # mlx=() 00:25:20.701 01:05:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:20.701 01:05:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.701 01:05:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:20.701 01:05:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:20.701 01:05:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:20.701 01:05:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.701 01:05:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:20.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:20.701 01:05:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.701 01:05:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:20.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:20.701 01:05:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:20.701 01:05:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.701 01:05:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.701 01:05:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.701 01:05:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.701 01:05:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:20.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:20.701 01:05:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.701 01:05:04 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.701 01:05:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.701 01:05:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.701 01:05:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.701 01:05:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:20.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:20.701 01:05:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.701 01:05:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:20.701 01:05:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:20.701 01:05:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:20.701 01:05:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:20.701 01:05:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.701 01:05:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.701 01:05:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.701 01:05:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:20.701 01:05:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.701 01:05:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.701 01:05:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:20.701 01:05:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.701 01:05:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.701 01:05:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:20.701 01:05:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:20.701 01:05:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.701 01:05:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.701 01:05:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.701 01:05:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.701 01:05:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:20.701 01:05:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.701 01:05:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.701 01:05:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.701 01:05:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:20.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:25:20.701 00:25:20.701 --- 10.0.0.2 ping statistics --- 00:25:20.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.701 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:25:20.701 01:05:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:25:20.701 00:25:20.701 --- 10.0.0.1 ping statistics --- 00:25:20.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.701 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:25:20.701 01:05:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.702 01:05:04 -- nvmf/common.sh@410 -- # return 0 00:25:20.702 01:05:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:20.702 01:05:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.702 01:05:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:20.702 01:05:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:20.702 01:05:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.702 01:05:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:20.702 01:05:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:20.702 01:05:04 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:20.702 01:05:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:20.702 01:05:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.702 01:05:04 -- common/autotest_common.sh@10 -- # set +x 00:25:20.702 01:05:04 -- nvmf/common.sh@469 -- # nvmfpid=3479864 00:25:20.702 01:05:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:20.702 01:05:04 -- nvmf/common.sh@470 -- # waitforlisten 3479864 00:25:20.702 01:05:04 -- common/autotest_common.sh@819 -- # '[' -z 3479864 ']' 00:25:20.702 01:05:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.702 01:05:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.702 01:05:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.702 01:05:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.702 01:05:04 -- common/autotest_common.sh@10 -- # set +x 00:25:20.702 [2024-07-23 01:05:04.898774] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:20.702 [2024-07-23 01:05:04.898863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.961 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.961 [2024-07-23 01:05:04.969524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.961 [2024-07-23 01:05:05.060730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.961 [2024-07-23 01:05:05.060903] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.961 [2024-07-23 01:05:05.060924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.961 [2024-07-23 01:05:05.060939] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.961 [2024-07-23 01:05:05.061047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.961 [2024-07-23 01:05:05.061104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.961 [2024-07-23 01:05:05.061221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.961 [2024-07-23 01:05:05.061223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.897 01:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:21.897 01:05:05 -- common/autotest_common.sh@852 -- # return 0 00:25:21.897 01:05:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:21.897 01:05:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 01:05:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.897 01:05:05 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.897 01:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 [2024-07-23 01:05:05.897231] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.897 01:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.897 01:05:05 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:21.897 01:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 Malloc0 00:25:21.897 01:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.897 01:05:05 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:21.897 01:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 01:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.897 01:05:05 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.897 01:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 01:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.897 01:05:05 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.897 01:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 [2024-07-23 01:05:05.948238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.897 01:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.897 01:05:05 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:21.897 01:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.897 01:05:05 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 [2024-07-23 01:05:05.955964] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:21.897 [ 00:25:21.897 { 00:25:21.897 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.897 "subtype": "Discovery", 00:25:21.897 "listen_addresses": [], 00:25:21.897 "allow_any_host": true, 00:25:21.897 "hosts": [] 00:25:21.897 }, 00:25:21.897 { 00:25:21.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:21.897 "subtype": "NVMe", 00:25:21.897 "listen_addresses": [ 00:25:21.897 { 00:25:21.897 "transport": "TCP", 00:25:21.897 "trtype": "TCP", 00:25:21.897 "adrfam": "IPv4", 00:25:21.897 "traddr": "10.0.0.2", 00:25:21.897 "trsvcid": "4420" 00:25:21.897 } 00:25:21.897 ], 00:25:21.897 "allow_any_host": true, 00:25:21.897 "hosts": [], 00:25:21.897 "serial_number": "SPDK00000000000001", 00:25:21.897 "model_number": "SPDK bdev Controller", 00:25:21.897 "max_namespaces": 2, 00:25:21.897 "min_cntlid": 1, 00:25:21.897 "max_cntlid": 65519, 00:25:21.897 "namespaces": [ 00:25:21.897 { 00:25:21.897 "nsid": 1, 00:25:21.897 "bdev_name": "Malloc0", 00:25:21.897 "name": "Malloc0", 00:25:21.897 "nguid": "B98C600184164E4EBCB4F08081EDDEFE", 00:25:21.897 "uuid": "b98c6001-8416-4e4e-bcb4-f08081eddefe" 00:25:21.897 } 00:25:21.897 ] 00:25:21.897 } 00:25:21.897 ] 00:25:21.897 01:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.897 01:05:05 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:21.897 01:05:05 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:21.897 01:05:05 -- host/aer.sh@33 -- # aerpid=3480031 00:25:21.897 01:05:05 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:21.897 01:05:05 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:21.897 01:05:05 -- common/autotest_common.sh@1244 -- # local i=0 00:25:21.897 01:05:05 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.897 01:05:05 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:21.897 01:05:05 -- common/autotest_common.sh@1247 -- # i=1 00:25:21.897 01:05:05 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:21.897 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.897 01:05:06 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.897 01:05:06 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:21.897 01:05:06 -- common/autotest_common.sh@1247 -- # i=2 00:25:21.897 01:05:06 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:22.155 01:05:06 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:22.155 01:05:06 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:22.155 01:05:06 -- common/autotest_common.sh@1255 -- # return 0 00:25:22.155 01:05:06 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:22.155 01:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.155 01:05:06 -- common/autotest_common.sh@10 -- # set +x 00:25:22.155 Malloc1 00:25:22.155 01:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.155 01:05:06 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:22.155 01:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.155 01:05:06 -- common/autotest_common.sh@10 -- # set +x 00:25:22.155 01:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.155 01:05:06 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:22.155 01:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.155 01:05:06 -- common/autotest_common.sh@10 -- # set +x 00:25:22.155 Asynchronous Event Request test 00:25:22.155 Attaching to 10.0.0.2 00:25:22.155 Attached to 10.0.0.2 00:25:22.155 Registering asynchronous event callbacks... 
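The rpc_cmd calls traced above make up the whole AER scenario: subsystem cnode1 is created with a two-namespace cap, Malloc0 becomes namespace 1, a TCP listener is added on 10.0.0.2:4420, the aer tool connects and registers for asynchronous events, and Malloc1 is then hot-added as namespace 2 to provoke the namespace-attribute-changed notice reported next. Condensed as a bash sketch (rpc_cmd is the harness wrapper around scripts/rpc.py; paths and arguments are copied from the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Subscribe for AENs in the background, then hot-add a second namespace.
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the namespace-changed AEN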
00:25:22.155 Starting namespace attribute notice tests for all controllers... 00:25:22.155 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:22.155 aer_cb - Changed Namespace 00:25:22.155 Cleaning up... 00:25:22.155 [ 00:25:22.155 { 00:25:22.155 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:22.155 "subtype": "Discovery", 00:25:22.155 "listen_addresses": [], 00:25:22.155 "allow_any_host": true, 00:25:22.155 "hosts": [] 00:25:22.155 }, 00:25:22.155 { 00:25:22.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.155 "subtype": "NVMe", 00:25:22.155 "listen_addresses": [ 00:25:22.155 { 00:25:22.155 "transport": "TCP", 00:25:22.155 "trtype": "TCP", 00:25:22.155 "adrfam": "IPv4", 00:25:22.155 "traddr": "10.0.0.2", 00:25:22.155 "trsvcid": "4420" 00:25:22.155 } 00:25:22.155 ], 00:25:22.155 "allow_any_host": true, 00:25:22.155 "hosts": [], 00:25:22.155 "serial_number": "SPDK00000000000001", 00:25:22.156 "model_number": "SPDK bdev Controller", 00:25:22.156 "max_namespaces": 2, 00:25:22.156 "min_cntlid": 1, 00:25:22.156 "max_cntlid": 65519, 00:25:22.156 "namespaces": [ 00:25:22.156 { 00:25:22.156 "nsid": 1, 00:25:22.156 "bdev_name": "Malloc0", 00:25:22.156 "name": "Malloc0", 00:25:22.156 "nguid": "B98C600184164E4EBCB4F08081EDDEFE", 00:25:22.156 "uuid": "b98c6001-8416-4e4e-bcb4-f08081eddefe" 00:25:22.156 }, 00:25:22.156 { 00:25:22.156 "nsid": 2, 00:25:22.156 "bdev_name": "Malloc1", 00:25:22.156 "name": "Malloc1", 00:25:22.156 "nguid": "C9C9B0A6B17A40AE916C9C71C8DF62A5", 00:25:22.156 "uuid": "c9c9b0a6-b17a-40ae-916c-9c71c8df62a5" 00:25:22.156 } 00:25:22.156 ] 00:25:22.156 } 00:25:22.156 ] 00:25:22.156 01:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.156 01:05:06 -- host/aer.sh@43 -- # wait 3480031 00:25:22.156 01:05:06 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:22.156 01:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.156 01:05:06 -- common/autotest_common.sh@10 -- # set +x 00:25:22.156 01:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.156 01:05:06 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:22.156 01:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.156 01:05:06 -- common/autotest_common.sh@10 -- # set +x 00:25:22.156 01:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.156 01:05:06 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.156 01:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.156 01:05:06 -- common/autotest_common.sh@10 -- # set +x 00:25:22.156 01:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.156 01:05:06 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:22.156 01:05:06 -- host/aer.sh@51 -- # nvmftestfini 00:25:22.156 01:05:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:22.156 01:05:06 -- nvmf/common.sh@116 -- # sync 00:25:22.156 01:05:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:22.156 01:05:06 -- nvmf/common.sh@119 -- # set +e 00:25:22.156 01:05:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:22.156 01:05:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:22.156 rmmod nvme_tcp 00:25:22.156 rmmod nvme_fabrics 00:25:22.156 rmmod nvme_keyring 00:25:22.415 01:05:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.415 01:05:06 -- nvmf/common.sh@123 -- # set -e 00:25:22.415 01:05:06 -- nvmf/common.sh@124 -- # return 0 00:25:22.415 01:05:06 -- nvmf/common.sh@477 -- # '[' -n 3479864 ']' 00:25:22.415 01:05:06 
-- nvmf/common.sh@478 -- # killprocess 3479864 00:25:22.415 01:05:06 -- common/autotest_common.sh@926 -- # '[' -z 3479864 ']' 00:25:22.415 01:05:06 -- common/autotest_common.sh@930 -- # kill -0 3479864 00:25:22.415 01:05:06 -- common/autotest_common.sh@931 -- # uname 00:25:22.415 01:05:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:22.415 01:05:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3479864 00:25:22.415 01:05:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:22.415 01:05:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:22.415 01:05:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3479864' 00:25:22.415 killing process with pid 3479864 00:25:22.415 01:05:06 -- common/autotest_common.sh@945 -- # kill 3479864 00:25:22.415 [2024-07-23 01:05:06.391114] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:22.415 01:05:06 -- common/autotest_common.sh@950 -- # wait 3479864 00:25:22.415 01:05:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.415 01:05:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.415 01:05:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.415 01:05:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.415 01:05:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.415 01:05:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.415 01:05:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.415 01:05:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.952 01:05:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:24.952 00:25:24.952 real 0m5.897s 00:25:24.952 user 0m6.965s 00:25:24.952 sys 0m1.853s 00:25:24.952 01:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.952 01:05:08 -- common/autotest_common.sh@10 -- # set +x 00:25:24.952 ************************************ 00:25:24.952 END TEST nvmf_aer 00:25:24.952 ************************************ 00:25:24.952 01:05:08 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:24.952 01:05:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:24.952 01:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.952 01:05:08 -- common/autotest_common.sh@10 -- # set +x 00:25:24.952 ************************************ 00:25:24.952 START TEST nvmf_async_init 00:25:24.952 ************************************ 00:25:24.952 01:05:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:24.952 * Looking for test storage... 
00:25:24.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.952 01:05:08 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.952 01:05:08 -- nvmf/common.sh@7 -- # uname -s 00:25:24.952 01:05:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.952 01:05:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.952 01:05:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.952 01:05:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.952 01:05:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.952 01:05:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.952 01:05:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.952 01:05:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.952 01:05:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.952 01:05:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.952 01:05:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.952 01:05:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.952 01:05:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.952 01:05:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.952 01:05:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.952 01:05:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.952 01:05:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.952 01:05:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.952 01:05:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.952 01:05:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.952 01:05:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.952 01:05:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.952 01:05:08 -- paths/export.sh@5 -- # export PATH 00:25:24.952 01:05:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.952 01:05:08 -- nvmf/common.sh@46 -- # : 0 00:25:24.952 01:05:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:24.952 01:05:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:24.952 01:05:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:24.952 01:05:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.952 01:05:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.952 01:05:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:24.952 01:05:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:24.952 01:05:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:24.952 01:05:08 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:24.952 01:05:08 -- host/async_init.sh@14 -- # null_block_size=512 00:25:24.952 01:05:08 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:24.952 01:05:08 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:24.952 01:05:08 -- host/async_init.sh@20 -- # uuidgen 00:25:24.952 01:05:08 -- host/async_init.sh@20 -- # tr -d - 00:25:24.952 01:05:08 -- host/async_init.sh@20 -- # nguid=b15ca4b476fa4f98aed05e11fa8b5d72 00:25:24.952 01:05:08 -- host/async_init.sh@22 -- # nvmftestinit 00:25:24.952 01:05:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:24.952 01:05:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.952 01:05:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:24.952 01:05:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:24.952 01:05:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:24.952 01:05:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.952 01:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.952 01:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.952 01:05:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:24.952 01:05:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:24.952 01:05:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:24.952 01:05:08 -- common/autotest_common.sh@10 -- # set +x 00:25:26.854 01:05:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:26.854 01:05:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:26.854 01:05:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:26.854 01:05:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:26.854 01:05:10 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:26.854 01:05:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:26.854 01:05:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:26.854 01:05:10 -- nvmf/common.sh@294 -- # net_devs=() 00:25:26.854 01:05:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:26.854 01:05:10 -- nvmf/common.sh@295 -- # e810=() 00:25:26.854 01:05:10 -- nvmf/common.sh@295 -- # local -ga e810 00:25:26.854 01:05:10 -- nvmf/common.sh@296 -- # x722=() 00:25:26.854 01:05:10 -- nvmf/common.sh@296 -- # local -ga x722 00:25:26.854 01:05:10 -- nvmf/common.sh@297 -- # mlx=() 00:25:26.854 01:05:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:26.854 01:05:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.854 01:05:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:26.854 01:05:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:26.854 01:05:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:26.854 01:05:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.854 01:05:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.854 01:05:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.854 01:05:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.854 01:05:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:26.854 01:05:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.854 
01:05:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.854 01:05:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.854 01:05:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.854 01:05:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.854 01:05:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.854 01:05:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.854 01:05:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.854 01:05:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.854 01:05:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.854 01:05:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.854 01:05:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.854 01:05:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:26.854 01:05:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:26.854 01:05:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:26.854 01:05:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.854 01:05:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.854 01:05:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.854 01:05:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:26.854 01:05:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.854 01:05:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.854 01:05:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:26.854 01:05:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.854 01:05:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.854 01:05:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:26.854 01:05:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:26.854 01:05:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.854 01:05:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.854 01:05:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.854 01:05:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.854 01:05:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:26.854 01:05:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.854 01:05:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.854 01:05:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.854 01:05:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:26.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:25:26.854 00:25:26.854 --- 10.0.0.2 ping statistics --- 00:25:26.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.854 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:25:26.854 01:05:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:25:26.854 00:25:26.854 --- 10.0.0.1 ping statistics --- 00:25:26.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.854 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:26.854 01:05:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.854 01:05:10 -- nvmf/common.sh@410 -- # return 0 00:25:26.854 01:05:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:26.854 01:05:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.854 01:05:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:26.854 01:05:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.855 01:05:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:26.855 01:05:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:26.855 01:05:10 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:26.855 01:05:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:26.855 01:05:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:26.855 01:05:10 -- common/autotest_common.sh@10 -- # set +x 00:25:26.855 01:05:10 -- nvmf/common.sh@469 -- # nvmfpid=3482087 00:25:26.855 01:05:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:26.855 01:05:10 -- nvmf/common.sh@470 -- # waitforlisten 3482087 00:25:26.855 01:05:10 -- common/autotest_common.sh@819 -- # '[' -z 3482087 ']' 00:25:26.855 01:05:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.855 01:05:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:26.855 01:05:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.855 01:05:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:26.855 01:05:10 -- common/autotest_common.sh@10 -- # set +x 00:25:26.855 [2024-07-23 01:05:10.957681] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:26.855 [2024-07-23 01:05:10.957767] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.855 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.855 [2024-07-23 01:05:11.027046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.112 [2024-07-23 01:05:11.116254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:27.112 [2024-07-23 01:05:11.116442] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.112 [2024-07-23 01:05:11.116462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.112 [2024-07-23 01:05:11.116477] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
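The nvmf_tcp_init section traced just above builds the point-to-point topology the tests run on: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the 10.0.0.2 target, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction verifies the link. Reassembled as a bash sketch from the commands in the trace (interface and namespace names are the harness's own):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator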
00:25:27.112 [2024-07-23 01:05:11.116526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.048 01:05:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:28.048 01:05:11 -- common/autotest_common.sh@852 -- # return 0 00:25:28.048 01:05:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:28.048 01:05:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 01:05:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.048 01:05:11 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:28.048 01:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 [2024-07-23 01:05:11.959786] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.048 01:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:11 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:28.048 01:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 null0 00:25:28.048 01:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:11 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:28.048 01:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 01:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:11 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:28.048 01:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 01:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:11 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b15ca4b476fa4f98aed05e11fa8b5d72 00:25:28.048 01:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 01:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:11 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:28.048 01:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 [2024-07-23 01:05:12.000001] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.048 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:12 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:28.048 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 nvme0n1 00:25:28.048 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:12 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:28.048 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 [ 00:25:28.048 { 00:25:28.048 "name": "nvme0n1", 00:25:28.048 "aliases": [ 00:25:28.048 
"b15ca4b4-76fa-4f98-aed0-5e11fa8b5d72" 00:25:28.048 ], 00:25:28.048 "product_name": "NVMe disk", 00:25:28.048 "block_size": 512, 00:25:28.048 "num_blocks": 2097152, 00:25:28.048 "uuid": "b15ca4b4-76fa-4f98-aed0-5e11fa8b5d72", 00:25:28.048 "assigned_rate_limits": { 00:25:28.048 "rw_ios_per_sec": 0, 00:25:28.048 "rw_mbytes_per_sec": 0, 00:25:28.048 "r_mbytes_per_sec": 0, 00:25:28.048 "w_mbytes_per_sec": 0 00:25:28.048 }, 00:25:28.048 "claimed": false, 00:25:28.048 "zoned": false, 00:25:28.048 "supported_io_types": { 00:25:28.048 "read": true, 00:25:28.048 "write": true, 00:25:28.048 "unmap": false, 00:25:28.048 "write_zeroes": true, 00:25:28.048 "flush": true, 00:25:28.048 "reset": true, 00:25:28.048 "compare": true, 00:25:28.048 "compare_and_write": true, 00:25:28.048 "abort": true, 00:25:28.048 "nvme_admin": true, 00:25:28.048 "nvme_io": true 00:25:28.048 }, 00:25:28.048 "driver_specific": { 00:25:28.048 "nvme": [ 00:25:28.048 { 00:25:28.048 "trid": { 00:25:28.048 "trtype": "TCP", 00:25:28.048 "adrfam": "IPv4", 00:25:28.048 "traddr": "10.0.0.2", 00:25:28.048 "trsvcid": "4420", 00:25:28.048 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:28.048 }, 00:25:28.048 "ctrlr_data": { 00:25:28.048 "cntlid": 1, 00:25:28.048 "vendor_id": "0x8086", 00:25:28.048 "model_number": "SPDK bdev Controller", 00:25:28.048 "serial_number": "00000000000000000000", 00:25:28.048 "firmware_revision": "24.01.1", 00:25:28.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.048 "oacs": { 00:25:28.048 "security": 0, 00:25:28.048 "format": 0, 00:25:28.048 "firmware": 0, 00:25:28.048 "ns_manage": 0 00:25:28.048 }, 00:25:28.048 "multi_ctrlr": true, 00:25:28.048 "ana_reporting": false 00:25:28.048 }, 00:25:28.048 "vs": { 00:25:28.048 "nvme_version": "1.3" 00:25:28.048 }, 00:25:28.048 "ns_data": { 00:25:28.048 "id": 1, 00:25:28.048 "can_share": true 00:25:28.048 } 00:25:28.048 } 00:25:28.048 ], 00:25:28.048 "mp_policy": "active_passive" 00:25:28.048 } 00:25:28.048 } 00:25:28.048 ] 00:25:28.048 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.048 01:05:12 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:28.048 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.048 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.048 [2024-07-23 01:05:12.248681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:28.048 [2024-07-23 01:05:12.248758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d480 (9): Bad file descriptor 00:25:28.308 [2024-07-23 01:05:12.380759] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:28.308 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.308 01:05:12 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:28.308 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.308 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.308 [ 00:25:28.308 { 00:25:28.308 "name": "nvme0n1", 00:25:28.308 "aliases": [ 00:25:28.308 "b15ca4b4-76fa-4f98-aed0-5e11fa8b5d72" 00:25:28.308 ], 00:25:28.308 "product_name": "NVMe disk", 00:25:28.308 "block_size": 512, 00:25:28.308 "num_blocks": 2097152, 00:25:28.308 "uuid": "b15ca4b4-76fa-4f98-aed0-5e11fa8b5d72", 00:25:28.308 "assigned_rate_limits": { 00:25:28.308 "rw_ios_per_sec": 0, 00:25:28.308 "rw_mbytes_per_sec": 0, 00:25:28.308 "r_mbytes_per_sec": 0, 00:25:28.308 "w_mbytes_per_sec": 0 00:25:28.308 }, 00:25:28.308 "claimed": false, 00:25:28.308 "zoned": false, 00:25:28.308 "supported_io_types": { 00:25:28.308 "read": true, 00:25:28.308 "write": true, 00:25:28.308 "unmap": false, 00:25:28.308 "write_zeroes": true, 00:25:28.308 "flush": true, 00:25:28.308 "reset": true, 00:25:28.308 "compare": true, 00:25:28.308 "compare_and_write": true, 00:25:28.308 "abort": true, 00:25:28.308 "nvme_admin": true, 00:25:28.308 "nvme_io": true 00:25:28.308 }, 00:25:28.308 "driver_specific": { 00:25:28.308 "nvme": [ 00:25:28.308 { 00:25:28.309 "trid": { 00:25:28.309 "trtype": "TCP", 00:25:28.309 "adrfam": "IPv4", 00:25:28.309 "traddr": "10.0.0.2", 00:25:28.309 "trsvcid": "4420", 00:25:28.309 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:28.309 }, 00:25:28.309 "ctrlr_data": { 00:25:28.309 "cntlid": 2, 00:25:28.309 "vendor_id": "0x8086", 00:25:28.309 "model_number": "SPDK bdev Controller", 00:25:28.309 "serial_number": "00000000000000000000", 00:25:28.309 "firmware_revision": "24.01.1", 00:25:28.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.309 "oacs": { 00:25:28.309 "security": 0, 00:25:28.309 "format": 0, 00:25:28.309 "firmware": 0, 00:25:28.309 "ns_manage": 0 00:25:28.309 }, 00:25:28.309 "multi_ctrlr": true, 00:25:28.309 "ana_reporting": false 00:25:28.309 }, 00:25:28.309 "vs": { 00:25:28.309 "nvme_version": "1.3" 00:25:28.309 }, 00:25:28.309 "ns_data": { 00:25:28.309 "id": 1, 00:25:28.309 "can_share": true 00:25:28.309 } 00:25:28.309 } 00:25:28.309 ], 00:25:28.309 "mp_policy": "active_passive" 00:25:28.309 } 00:25:28.309 } 00:25:28.309 ] 00:25:28.309 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.309 01:05:12 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.309 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.309 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.309 01:05:12 -- host/async_init.sh@53 -- # mktemp 00:25:28.309 01:05:12 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1eh8Npl0TH 00:25:28.309 01:05:12 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:28.309 01:05:12 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1eh8Npl0TH 00:25:28.309 01:05:12 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.309 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.309 01:05:12 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:28.309 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.309 [2024-07-23 01:05:12.425246] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.309 [2024-07-23 01:05:12.425371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:28.309 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.309 01:05:12 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1eh8Npl0TH 00:25:28.309 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.309 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.309 01:05:12 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1eh8Npl0TH 00:25:28.309 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.309 [2024-07-23 01:05:12.441283] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.309 nvme0n1 00:25:28.309 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.309 01:05:12 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:28.309 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.309 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.568 [ 00:25:28.568 { 00:25:28.568 "name": "nvme0n1", 00:25:28.568 "aliases": [ 00:25:28.568 "b15ca4b4-76fa-4f98-aed0-5e11fa8b5d72" 00:25:28.568 ], 00:25:28.568 "product_name": "NVMe disk", 00:25:28.568 "block_size": 512, 00:25:28.568 "num_blocks": 2097152, 00:25:28.568 "uuid": "b15ca4b4-76fa-4f98-aed0-5e11fa8b5d72", 00:25:28.568 "assigned_rate_limits": { 00:25:28.568 "rw_ios_per_sec": 0, 00:25:28.568 "rw_mbytes_per_sec": 0, 00:25:28.568 "r_mbytes_per_sec": 0, 00:25:28.568 "w_mbytes_per_sec": 0 00:25:28.568 }, 00:25:28.568 "claimed": false, 00:25:28.568 "zoned": false, 00:25:28.568 "supported_io_types": { 00:25:28.568 "read": true, 00:25:28.568 "write": true, 00:25:28.568 "unmap": false, 00:25:28.568 "write_zeroes": true, 00:25:28.568 "flush": true, 00:25:28.568 "reset": true, 00:25:28.568 "compare": true, 00:25:28.568 "compare_and_write": true, 00:25:28.568 "abort": true, 00:25:28.568 "nvme_admin": true, 00:25:28.568 "nvme_io": true 00:25:28.568 }, 00:25:28.568 "driver_specific": { 00:25:28.568 "nvme": [ 00:25:28.568 { 00:25:28.568 "trid": { 00:25:28.568 "trtype": "TCP", 00:25:28.568 "adrfam": "IPv4", 00:25:28.568 "traddr": "10.0.0.2", 00:25:28.568 "trsvcid": "4421", 00:25:28.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:28.568 }, 00:25:28.568 "ctrlr_data": { 00:25:28.568 "cntlid": 3, 00:25:28.568 "vendor_id": "0x8086", 00:25:28.568 "model_number": "SPDK bdev Controller", 00:25:28.568 "serial_number": "00000000000000000000", 00:25:28.568 "firmware_revision": "24.01.1", 00:25:28.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.568 "oacs": { 00:25:28.568 "security": 0, 00:25:28.568 "format": 0, 00:25:28.569 "firmware": 0, 00:25:28.569 "ns_manage": 0 00:25:28.569 }, 00:25:28.569 "multi_ctrlr": true, 00:25:28.569 "ana_reporting": false 00:25:28.569 }, 00:25:28.569 "vs": 
{ 00:25:28.569 "nvme_version": "1.3" 00:25:28.569 }, 00:25:28.569 "ns_data": { 00:25:28.569 "id": 1, 00:25:28.569 "can_share": true 00:25:28.569 } 00:25:28.569 } 00:25:28.569 ], 00:25:28.569 "mp_policy": "active_passive" 00:25:28.569 } 00:25:28.569 } 00:25:28.569 ] 00:25:28.569 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.569 01:05:12 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.569 01:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.569 01:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:28.569 01:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.569 01:05:12 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.1eh8Npl0TH 00:25:28.569 01:05:12 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:28.569 01:05:12 -- host/async_init.sh@78 -- # nvmftestfini 00:25:28.569 01:05:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:28.569 01:05:12 -- nvmf/common.sh@116 -- # sync 00:25:28.569 01:05:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:28.569 01:05:12 -- nvmf/common.sh@119 -- # set +e 00:25:28.569 01:05:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:28.569 01:05:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:28.569 rmmod nvme_tcp 00:25:28.569 rmmod nvme_fabrics 00:25:28.569 rmmod nvme_keyring 00:25:28.569 01:05:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:28.569 01:05:12 -- nvmf/common.sh@123 -- # set -e 00:25:28.569 01:05:12 -- nvmf/common.sh@124 -- # return 0 00:25:28.569 01:05:12 -- nvmf/common.sh@477 -- # '[' -n 3482087 ']' 00:25:28.569 01:05:12 -- nvmf/common.sh@478 -- # killprocess 3482087 00:25:28.569 01:05:12 -- common/autotest_common.sh@926 -- # '[' -z 3482087 ']' 00:25:28.569 01:05:12 -- common/autotest_common.sh@930 -- # kill -0 3482087 00:25:28.569 01:05:12 -- common/autotest_common.sh@931 -- # uname 00:25:28.569 01:05:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:28.569 01:05:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3482087 00:25:28.569 01:05:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:28.569 01:05:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:28.569 01:05:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3482087' 00:25:28.569 killing process with pid 3482087 00:25:28.569 01:05:12 -- common/autotest_common.sh@945 -- # kill 3482087 00:25:28.569 01:05:12 -- common/autotest_common.sh@950 -- # wait 3482087 00:25:28.827 01:05:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:28.827 01:05:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:28.827 01:05:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:28.827 01:05:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.827 01:05:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:28.827 01:05:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.827 01:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.827 01:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.729 01:05:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:30.729 00:25:30.729 real 0m6.196s 00:25:30.729 user 0m2.973s 00:25:30.729 sys 0m1.849s 00:25:30.729 01:05:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.729 01:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:30.729 ************************************ 00:25:30.729 END TEST nvmf_async_init 00:25:30.729 
************************************ 00:25:30.729 01:05:14 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:30.729 01:05:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.729 01:05:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.729 01:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:30.729 ************************************ 00:25:30.729 START TEST dma 00:25:30.729 ************************************ 00:25:30.729 01:05:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:30.988 * Looking for test storage... 00:25:30.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.988 01:05:14 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.988 01:05:14 -- nvmf/common.sh@7 -- # uname -s 00:25:30.988 01:05:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.988 01:05:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.988 01:05:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.988 01:05:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.988 01:05:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.988 01:05:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.988 01:05:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.988 01:05:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.988 01:05:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.988 01:05:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.988 01:05:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.988 01:05:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.988 01:05:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.988 01:05:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.988 01:05:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.988 01:05:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.988 01:05:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.988 01:05:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.988 01:05:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.988 01:05:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.988 01:05:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:14 -- paths/export.sh@5 -- # export PATH 00:25:30.989 01:05:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:14 -- nvmf/common.sh@46 -- # : 0 00:25:30.989 01:05:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.989 01:05:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.989 01:05:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.989 01:05:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.989 01:05:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.989 01:05:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.989 01:05:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.989 01:05:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.989 01:05:14 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:30.989 01:05:14 -- host/dma.sh@13 -- # exit 0 00:25:30.989 00:25:30.989 real 0m0.067s 00:25:30.989 user 0m0.034s 00:25:30.989 sys 0m0.038s 00:25:30.989 01:05:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.989 01:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:30.989 ************************************ 00:25:30.989 END TEST dma 00:25:30.989 ************************************ 00:25:30.989 01:05:14 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:30.989 01:05:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.989 01:05:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.989 01:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:30.989 ************************************ 00:25:30.989 START TEST nvmf_identify 00:25:30.989 ************************************ 00:25:30.989 01:05:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:30.989 * Looking for 
test storage... 00:25:30.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.989 01:05:15 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.989 01:05:15 -- nvmf/common.sh@7 -- # uname -s 00:25:30.989 01:05:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.989 01:05:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.989 01:05:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.989 01:05:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.989 01:05:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.989 01:05:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.989 01:05:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.989 01:05:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.989 01:05:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.989 01:05:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.989 01:05:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.989 01:05:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.989 01:05:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.989 01:05:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.989 01:05:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.989 01:05:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.989 01:05:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.989 01:05:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.989 01:05:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.989 01:05:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:15 -- paths/export.sh@5 -- # export PATH 00:25:30.989 01:05:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.989 01:05:15 -- nvmf/common.sh@46 -- # : 0 00:25:30.989 01:05:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.989 01:05:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.989 01:05:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.989 01:05:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.989 01:05:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.989 01:05:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.989 01:05:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.989 01:05:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.989 01:05:15 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:30.989 01:05:15 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:30.989 01:05:15 -- host/identify.sh@14 -- # nvmftestinit 00:25:30.989 01:05:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:30.989 01:05:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.989 01:05:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:30.989 01:05:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:30.989 01:05:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:30.989 01:05:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.989 01:05:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.989 01:05:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.989 01:05:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:30.989 01:05:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:30.989 01:05:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:30.989 01:05:15 -- common/autotest_common.sh@10 -- # set +x 00:25:32.896 01:05:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:32.896 01:05:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:32.896 01:05:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:32.896 01:05:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:32.896 01:05:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:32.896 01:05:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:32.896 01:05:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:32.896 01:05:16 -- nvmf/common.sh@294 -- # net_devs=() 00:25:32.896 01:05:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:32.896 01:05:16 -- nvmf/common.sh@295 
-- # e810=() 00:25:32.896 01:05:16 -- nvmf/common.sh@295 -- # local -ga e810 00:25:32.896 01:05:16 -- nvmf/common.sh@296 -- # x722=() 00:25:32.896 01:05:16 -- nvmf/common.sh@296 -- # local -ga x722 00:25:32.896 01:05:16 -- nvmf/common.sh@297 -- # mlx=() 00:25:32.896 01:05:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:32.896 01:05:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.896 01:05:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:32.896 01:05:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:32.896 01:05:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:32.896 01:05:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.896 01:05:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.896 01:05:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.896 01:05:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.896 01:05:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.896 01:05:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.897 01:05:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.897 01:05:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:32.897 01:05:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:32.897 01:05:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:32.897 01:05:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.897 01:05:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.897 01:05:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.897 01:05:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.897 01:05:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.897 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:25:32.897 01:05:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.897 01:05:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.897 01:05:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.897 01:05:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.897 01:05:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.897 01:05:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.897 01:05:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.897 01:05:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:32.897 01:05:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:32.897 01:05:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:32.897 01:05:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:32.897 01:05:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:32.897 01:05:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.897 01:05:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.897 01:05:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.897 01:05:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:32.897 01:05:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.897 01:05:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.897 01:05:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:32.897 01:05:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.897 01:05:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.897 01:05:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:32.897 01:05:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:32.897 01:05:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.897 01:05:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.897 01:05:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.897 01:05:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.897 01:05:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:32.897 01:05:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.897 01:05:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.897 01:05:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.897 01:05:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:32.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:25:32.897 00:25:32.897 --- 10.0.0.2 ping statistics --- 00:25:32.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.897 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:32.897 01:05:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:25:32.897 00:25:32.897 --- 10.0.0.1 ping statistics --- 00:25:32.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.897 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:32.897 01:05:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.897 01:05:17 -- nvmf/common.sh@410 -- # return 0 00:25:32.897 01:05:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:32.897 01:05:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.897 01:05:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:32.897 01:05:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:32.897 01:05:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.897 01:05:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:32.897 01:05:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:32.897 01:05:17 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:32.897 01:05:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:32.897 01:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:32.897 01:05:17 -- host/identify.sh@19 -- # nvmfpid=3484236 00:25:32.897 01:05:17 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.897 01:05:17 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.897 01:05:17 -- host/identify.sh@23 -- # waitforlisten 3484236 00:25:32.897 01:05:17 -- common/autotest_common.sh@819 -- # '[' -z 3484236 ']' 00:25:32.897 01:05:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.897 01:05:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.897 01:05:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.897 01:05:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.897 01:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:33.156 [2024-07-23 01:05:17.105814] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:33.156 [2024-07-23 01:05:17.105887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.156 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.156 [2024-07-23 01:05:17.172342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.156 [2024-07-23 01:05:17.263148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:33.156 [2024-07-23 01:05:17.263313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.156 [2024-07-23 01:05:17.263334] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.156 [2024-07-23 01:05:17.263350] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
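The nvmf_tcp_init plumbing traced above reduces to the following sketch: one ice port (cvl_0_0) is moved into a fresh network namespace (cvl_0_0_ns_spdk) to act as the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction verifies the path. Interface names, addresses and commands are taken from the trace; the surrounding helper functions and error handling are omitted.

# Condensed from the nvmf_tcp_init trace above (not the framework verbatim).
TARGET_IF=cvl_0_0           # moved into the namespace, becomes 10.0.0.2
INITIATOR_IF=cvl_0_1        # stays in the default namespace, becomes 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator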
00:25:33.156 [2024-07-23 01:05:17.263430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.156 [2024-07-23 01:05:17.263483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.157 [2024-07-23 01:05:17.263547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.157 [2024-07-23 01:05:17.263549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.093 01:05:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:34.093 01:05:18 -- common/autotest_common.sh@852 -- # return 0 00:25:34.093 01:05:18 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 [2024-07-23 01:05:18.043101] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.093 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.093 01:05:18 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:34.093 01:05:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 01:05:18 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 Malloc0 00:25:34.093 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.093 01:05:18 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.093 01:05:18 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.093 01:05:18 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 [2024-07-23 01:05:18.123946] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.093 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.093 01:05:18 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.093 01:05:18 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:34.093 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.093 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 [2024-07-23 01:05:18.139690] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:34.093 [ 
00:25:34.093 { 00:25:34.093 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:34.093 "subtype": "Discovery", 00:25:34.093 "listen_addresses": [ 00:25:34.094 { 00:25:34.094 "transport": "TCP", 00:25:34.094 "trtype": "TCP", 00:25:34.094 "adrfam": "IPv4", 00:25:34.094 "traddr": "10.0.0.2", 00:25:34.094 "trsvcid": "4420" 00:25:34.094 } 00:25:34.094 ], 00:25:34.094 "allow_any_host": true, 00:25:34.094 "hosts": [] 00:25:34.094 }, 00:25:34.094 { 00:25:34.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.094 "subtype": "NVMe", 00:25:34.094 "listen_addresses": [ 00:25:34.094 { 00:25:34.094 "transport": "TCP", 00:25:34.094 "trtype": "TCP", 00:25:34.094 "adrfam": "IPv4", 00:25:34.094 "traddr": "10.0.0.2", 00:25:34.094 "trsvcid": "4420" 00:25:34.094 } 00:25:34.094 ], 00:25:34.094 "allow_any_host": true, 00:25:34.094 "hosts": [], 00:25:34.094 "serial_number": "SPDK00000000000001", 00:25:34.094 "model_number": "SPDK bdev Controller", 00:25:34.094 "max_namespaces": 32, 00:25:34.094 "min_cntlid": 1, 00:25:34.094 "max_cntlid": 65519, 00:25:34.094 "namespaces": [ 00:25:34.094 { 00:25:34.094 "nsid": 1, 00:25:34.094 "bdev_name": "Malloc0", 00:25:34.094 "name": "Malloc0", 00:25:34.094 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:34.094 "eui64": "ABCDEF0123456789", 00:25:34.094 "uuid": "99fe6f23-e75a-4f8f-a20f-5eddb38097a0" 00:25:34.094 } 00:25:34.094 ] 00:25:34.094 } 00:25:34.094 ] 00:25:34.094 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.094 01:05:18 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:34.094 [2024-07-23 01:05:18.163541] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
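Between the transport init and the nvmf_get_subsystems dump above, the script issues a short RPC sequence to build the target: a TCP transport, a RAM-backed Malloc0 bdev, the cnode1 subsystem with that bdev as namespace 1, and TCP listeners for both cnode1 and the discovery subsystem on 10.0.0.2:4420. A condensed sketch of that sequence follows, with arguments copied from the trace; rpc_cmd is the test framework's JSON-RPC wrapper, shown here as it appears in the trace.

# RPC sequence condensed from the trace above (flags as recorded there).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems                       # produces the JSON shown above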
00:25:34.094 [2024-07-23 01:05:18.163584] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484399 ] 00:25:34.094 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.094 [2024-07-23 01:05:18.197711] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:34.094 [2024-07-23 01:05:18.197764] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:34.094 [2024-07-23 01:05:18.197774] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:34.094 [2024-07-23 01:05:18.197794] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:34.094 [2024-07-23 01:05:18.197807] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:34.094 [2024-07-23 01:05:18.198108] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:34.094 [2024-07-23 01:05:18.198168] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2403eb0 0 00:25:34.094 [2024-07-23 01:05:18.204648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:34.094 [2024-07-23 01:05:18.204668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:34.094 [2024-07-23 01:05:18.204676] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:34.094 [2024-07-23 01:05:18.204682] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:34.094 [2024-07-23 01:05:18.204743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.204755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.204762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.094 [2024-07-23 01:05:18.204779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:34.094 [2024-07-23 01:05:18.204805] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.094 [2024-07-23 01:05:18.212631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.094 [2024-07-23 01:05:18.212648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.094 [2024-07-23 01:05:18.212655] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.212662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.094 [2024-07-23 01:05:18.212678] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:34.094 [2024-07-23 01:05:18.212703] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:34.094 [2024-07-23 01:05:18.212712] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:34.094 [2024-07-23 01:05:18.212731] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.212740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.212746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.094 [2024-07-23 01:05:18.212757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.094 [2024-07-23 01:05:18.212781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.094 [2024-07-23 01:05:18.212967] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.094 [2024-07-23 01:05:18.212979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.094 [2024-07-23 01:05:18.212986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.212993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.094 [2024-07-23 01:05:18.213003] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:34.094 [2024-07-23 01:05:18.213016] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:34.094 [2024-07-23 01:05:18.213028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.094 [2024-07-23 01:05:18.213052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.094 [2024-07-23 01:05:18.213093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.094 [2024-07-23 01:05:18.213248] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.094 [2024-07-23 01:05:18.213264] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.094 [2024-07-23 01:05:18.213271] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.094 [2024-07-23 01:05:18.213287] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:34.094 [2024-07-23 01:05:18.213301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:34.094 [2024-07-23 01:05:18.213314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213327] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.094 [2024-07-23 01:05:18.213337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.094 [2024-07-23 01:05:18.213358] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.094 [2024-07-23 01:05:18.213538] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.094 [2024-07-23 
01:05:18.213554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.094 [2024-07-23 01:05:18.213560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.094 [2024-07-23 01:05:18.213576] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:34.094 [2024-07-23 01:05:18.213593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.094 [2024-07-23 01:05:18.213627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.094 [2024-07-23 01:05:18.213650] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.094 [2024-07-23 01:05:18.213781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.094 [2024-07-23 01:05:18.213792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.094 [2024-07-23 01:05:18.213799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213805] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.094 [2024-07-23 01:05:18.213815] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:34.094 [2024-07-23 01:05:18.213823] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:34.094 [2024-07-23 01:05:18.213836] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:34.094 [2024-07-23 01:05:18.213959] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:34.094 [2024-07-23 01:05:18.213968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:34.094 [2024-07-23 01:05:18.213982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.094 [2024-07-23 01:05:18.213993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.214010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.095 [2024-07-23 01:05:18.214031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.095 [2024-07-23 01:05:18.214226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.095 [2024-07-23 01:05:18.214241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.095 [2024-07-23 01:05:18.214248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.095 [2024-07-23 01:05:18.214264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:34.095 [2024-07-23 01:05:18.214281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214289] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.214306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.095 [2024-07-23 01:05:18.214327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.095 [2024-07-23 01:05:18.214452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.095 [2024-07-23 01:05:18.214467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.095 [2024-07-23 01:05:18.214474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214480] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.095 [2024-07-23 01:05:18.214489] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:34.095 [2024-07-23 01:05:18.214498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:34.095 [2024-07-23 01:05:18.214511] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:34.095 [2024-07-23 01:05:18.214526] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:34.095 [2024-07-23 01:05:18.214540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.214564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.095 [2024-07-23 01:05:18.214585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.095 [2024-07-23 01:05:18.214776] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.095 [2024-07-23 01:05:18.214792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.095 [2024-07-23 01:05:18.214799] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214805] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403eb0): datao=0, datal=4096, cccid=0 00:25:34.095 [2024-07-23 01:05:18.214813] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x245cf80) on tqpair(0x2403eb0): 
expected_datao=0, payload_size=4096 00:25:34.095 [2024-07-23 01:05:18.214825] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214837] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.095 [2024-07-23 01:05:18.214909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.095 [2024-07-23 01:05:18.214915] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.214922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.095 [2024-07-23 01:05:18.214934] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:34.095 [2024-07-23 01:05:18.214943] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:34.095 [2024-07-23 01:05:18.214951] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:34.095 [2024-07-23 01:05:18.214959] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:34.095 [2024-07-23 01:05:18.214967] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:34.095 [2024-07-23 01:05:18.214975] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:34.095 [2024-07-23 01:05:18.214994] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:34.095 [2024-07-23 01:05:18.215007] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.095 [2024-07-23 01:05:18.215068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.095 [2024-07-23 01:05:18.215249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.095 [2024-07-23 01:05:18.215265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.095 [2024-07-23 01:05:18.215272] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215278] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245cf80) on tqpair=0x2403eb0 00:25:34.095 [2024-07-23 01:05:18.215292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215299] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
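At this point spdk_nvme_identify has connected to the discovery controller at 10.0.0.2:4420, walked it through the CC.EN/CSTS.RDY enable sequence, identified it, and is arming its async event requests. As a hypothetical cross-check (not something this test run does), the same discovery service could be queried with the kernel initiator via nvme-cli, reusing the host NQN and host ID generated near the top of the script:

# Hypothetical nvme-cli probe of the same discovery subsystem (assumes
# nvme-cli is installed; NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh).
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"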
00:25:34.095 [2024-07-23 01:05:18.215325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.095 [2024-07-23 01:05:18.215356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.095 [2024-07-23 01:05:18.215386] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215403] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.095 [2024-07-23 01:05:18.215436] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:34.095 [2024-07-23 01:05:18.215455] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:34.095 [2024-07-23 01:05:18.215467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.095 [2024-07-23 01:05:18.215511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245cf80, cid 0, qid 0 00:25:34.095 [2024-07-23 01:05:18.215536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d0e0, cid 1, qid 0 00:25:34.095 [2024-07-23 01:05:18.215544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d240, cid 2, qid 0 00:25:34.095 [2024-07-23 01:05:18.215552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.095 [2024-07-23 01:05:18.215559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d500, cid 4, qid 0 00:25:34.095 [2024-07-23 01:05:18.215738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.095 [2024-07-23 01:05:18.215754] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.095 [2024-07-23 01:05:18.215760] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215767] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d500) on tqpair=0x2403eb0 00:25:34.095 [2024-07-23 01:05:18.215777] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:34.095 [2024-07-23 01:05:18.215786] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:34.095 [2024-07-23 01:05:18.215803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.215819] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403eb0) 00:25:34.095 [2024-07-23 01:05:18.215829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.095 [2024-07-23 01:05:18.215850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d500, cid 4, qid 0 00:25:34.095 [2024-07-23 01:05:18.215995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.095 [2024-07-23 01:05:18.216010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.095 [2024-07-23 01:05:18.216017] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.095 [2024-07-23 01:05:18.216023] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403eb0): datao=0, datal=4096, cccid=4 00:25:34.096 [2024-07-23 01:05:18.216031] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x245d500) on tqpair(0x2403eb0): expected_datao=0, payload_size=4096 00:25:34.096 [2024-07-23 01:05:18.216075] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.216084] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.260639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.096 [2024-07-23 01:05:18.260661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.096 [2024-07-23 01:05:18.260670] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.260677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d500) on tqpair=0x2403eb0 00:25:34.096 [2024-07-23 01:05:18.260697] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:34.096 [2024-07-23 01:05:18.260733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.260744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.260750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403eb0) 00:25:34.096 [2024-07-23 01:05:18.260761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.096 [2024-07-23 01:05:18.260773] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.260780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.260786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2403eb0) 00:25:34.096 [2024-07-23 
01:05:18.260795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.096 [2024-07-23 01:05:18.260822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d500, cid 4, qid 0 00:25:34.096 [2024-07-23 01:05:18.260834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d660, cid 5, qid 0 00:25:34.096 [2024-07-23 01:05:18.261069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.096 [2024-07-23 01:05:18.261085] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.096 [2024-07-23 01:05:18.261091] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.261097] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403eb0): datao=0, datal=1024, cccid=4 00:25:34.096 [2024-07-23 01:05:18.261105] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x245d500) on tqpair(0x2403eb0): expected_datao=0, payload_size=1024 00:25:34.096 [2024-07-23 01:05:18.261131] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.261139] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.261147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.096 [2024-07-23 01:05:18.261156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.096 [2024-07-23 01:05:18.261162] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.096 [2024-07-23 01:05:18.261168] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d660) on tqpair=0x2403eb0 00:25:34.360 [2024-07-23 01:05:18.303629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.360 [2024-07-23 01:05:18.303649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.360 [2024-07-23 01:05:18.303656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.303663] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d500) on tqpair=0x2403eb0 00:25:34.360 [2024-07-23 01:05:18.303686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.303696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.303702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403eb0) 00:25:34.360 [2024-07-23 01:05:18.303713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.360 [2024-07-23 01:05:18.303743] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d500, cid 4, qid 0 00:25:34.360 [2024-07-23 01:05:18.303949] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.360 [2024-07-23 01:05:18.303961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.360 [2024-07-23 01:05:18.303975] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.303982] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403eb0): datao=0, datal=3072, cccid=4 00:25:34.360 [2024-07-23 01:05:18.303990] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x245d500) on tqpair(0x2403eb0): expected_datao=0, payload_size=3072 
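The GET LOG PAGE commands in this exchange (cdw10 0x00ff0070 and 0x02ff0070 above, plus the 0x00010070 that follows) all address log page 0x70, the discovery log; the upper half of cdw10 is the dword count minus one. Decoding them, as in the sketch below, gives transfers of 1024, 3072 and 8 bytes, matching the c2h_data datal values in the trace: consistent with a header read, a full read of the 3072-byte log once the two records are known, and what looks like a re-read of the generation counter to confirm the log did not change mid-retrieval.

# Decode the cdw10 values printed in the trace (values only; no device access).
for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
    lid=$(( cdw10 & 0xff ))                 # log page identifier
    bytes=$(( ((cdw10 >> 16) + 1) * 4 ))    # NUMDL is a 0's-based dword count
    printf 'cdw10=%s  lid=0x%02x  transfer=%u bytes\n' "$cdw10" "$lid" "$bytes"
done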
00:25:34.360 [2024-07-23 01:05:18.304001] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304008] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.360 [2024-07-23 01:05:18.304054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.360 [2024-07-23 01:05:18.304060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304067] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d500) on tqpair=0x2403eb0 00:25:34.360 [2024-07-23 01:05:18.304081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304089] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403eb0) 00:25:34.360 [2024-07-23 01:05:18.304106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.360 [2024-07-23 01:05:18.304133] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d500, cid 4, qid 0 00:25:34.360 [2024-07-23 01:05:18.304290] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.360 [2024-07-23 01:05:18.304305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.360 [2024-07-23 01:05:18.304312] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304318] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403eb0): datao=0, datal=8, cccid=4 00:25:34.360 [2024-07-23 01:05:18.304326] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x245d500) on tqpair(0x2403eb0): expected_datao=0, payload_size=8 00:25:34.360 [2024-07-23 01:05:18.304336] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.304343] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.344757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.360 [2024-07-23 01:05:18.344776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.360 [2024-07-23 01:05:18.344784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.360 [2024-07-23 01:05:18.344791] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d500) on tqpair=0x2403eb0 00:25:34.360 ===================================================== 00:25:34.360 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:34.360 ===================================================== 00:25:34.360 Controller Capabilities/Features 00:25:34.360 ================================ 00:25:34.360 Vendor ID: 0000 00:25:34.360 Subsystem Vendor ID: 0000 00:25:34.360 Serial Number: .................... 00:25:34.360 Model Number: ........................................ 
00:25:34.360 Firmware Version: 24.01.1 00:25:34.360 Recommended Arb Burst: 0 00:25:34.360 IEEE OUI Identifier: 00 00 00 00:25:34.360 Multi-path I/O 00:25:34.360 May have multiple subsystem ports: No 00:25:34.360 May have multiple controllers: No 00:25:34.360 Associated with SR-IOV VF: No 00:25:34.360 Max Data Transfer Size: 131072 00:25:34.360 Max Number of Namespaces: 0 00:25:34.360 Max Number of I/O Queues: 1024 00:25:34.360 NVMe Specification Version (VS): 1.3 00:25:34.360 NVMe Specification Version (Identify): 1.3 00:25:34.360 Maximum Queue Entries: 128 00:25:34.360 Contiguous Queues Required: Yes 00:25:34.360 Arbitration Mechanisms Supported 00:25:34.360 Weighted Round Robin: Not Supported 00:25:34.360 Vendor Specific: Not Supported 00:25:34.360 Reset Timeout: 15000 ms 00:25:34.360 Doorbell Stride: 4 bytes 00:25:34.360 NVM Subsystem Reset: Not Supported 00:25:34.360 Command Sets Supported 00:25:34.360 NVM Command Set: Supported 00:25:34.360 Boot Partition: Not Supported 00:25:34.360 Memory Page Size Minimum: 4096 bytes 00:25:34.360 Memory Page Size Maximum: 4096 bytes 00:25:34.360 Persistent Memory Region: Not Supported 00:25:34.360 Optional Asynchronous Events Supported 00:25:34.360 Namespace Attribute Notices: Not Supported 00:25:34.360 Firmware Activation Notices: Not Supported 00:25:34.360 ANA Change Notices: Not Supported 00:25:34.360 PLE Aggregate Log Change Notices: Not Supported 00:25:34.360 LBA Status Info Alert Notices: Not Supported 00:25:34.360 EGE Aggregate Log Change Notices: Not Supported 00:25:34.360 Normal NVM Subsystem Shutdown event: Not Supported 00:25:34.360 Zone Descriptor Change Notices: Not Supported 00:25:34.360 Discovery Log Change Notices: Supported 00:25:34.360 Controller Attributes 00:25:34.360 128-bit Host Identifier: Not Supported 00:25:34.360 Non-Operational Permissive Mode: Not Supported 00:25:34.360 NVM Sets: Not Supported 00:25:34.360 Read Recovery Levels: Not Supported 00:25:34.360 Endurance Groups: Not Supported 00:25:34.360 Predictable Latency Mode: Not Supported 00:25:34.360 Traffic Based Keep ALive: Not Supported 00:25:34.360 Namespace Granularity: Not Supported 00:25:34.360 SQ Associations: Not Supported 00:25:34.360 UUID List: Not Supported 00:25:34.360 Multi-Domain Subsystem: Not Supported 00:25:34.360 Fixed Capacity Management: Not Supported 00:25:34.360 Variable Capacity Management: Not Supported 00:25:34.360 Delete Endurance Group: Not Supported 00:25:34.360 Delete NVM Set: Not Supported 00:25:34.360 Extended LBA Formats Supported: Not Supported 00:25:34.360 Flexible Data Placement Supported: Not Supported 00:25:34.360 00:25:34.360 Controller Memory Buffer Support 00:25:34.360 ================================ 00:25:34.360 Supported: No 00:25:34.360 00:25:34.360 Persistent Memory Region Support 00:25:34.360 ================================ 00:25:34.360 Supported: No 00:25:34.360 00:25:34.360 Admin Command Set Attributes 00:25:34.360 ============================ 00:25:34.360 Security Send/Receive: Not Supported 00:25:34.360 Format NVM: Not Supported 00:25:34.360 Firmware Activate/Download: Not Supported 00:25:34.360 Namespace Management: Not Supported 00:25:34.360 Device Self-Test: Not Supported 00:25:34.360 Directives: Not Supported 00:25:34.360 NVMe-MI: Not Supported 00:25:34.360 Virtualization Management: Not Supported 00:25:34.360 Doorbell Buffer Config: Not Supported 00:25:34.360 Get LBA Status Capability: Not Supported 00:25:34.360 Command & Feature Lockdown Capability: Not Supported 00:25:34.361 Abort Command Limit: 1 00:25:34.361 
Async Event Request Limit: 4 00:25:34.361 Number of Firmware Slots: N/A 00:25:34.361 Firmware Slot 1 Read-Only: N/A 00:25:34.361 Firmware Activation Without Reset: N/A 00:25:34.361 Multiple Update Detection Support: N/A 00:25:34.361 Firmware Update Granularity: No Information Provided 00:25:34.361 Per-Namespace SMART Log: No 00:25:34.361 Asymmetric Namespace Access Log Page: Not Supported 00:25:34.361 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:34.361 Command Effects Log Page: Not Supported 00:25:34.361 Get Log Page Extended Data: Supported 00:25:34.361 Telemetry Log Pages: Not Supported 00:25:34.361 Persistent Event Log Pages: Not Supported 00:25:34.361 Supported Log Pages Log Page: May Support 00:25:34.361 Commands Supported & Effects Log Page: Not Supported 00:25:34.361 Feature Identifiers & Effects Log Page:May Support 00:25:34.361 NVMe-MI Commands & Effects Log Page: May Support 00:25:34.361 Data Area 4 for Telemetry Log: Not Supported 00:25:34.361 Error Log Page Entries Supported: 128 00:25:34.361 Keep Alive: Not Supported 00:25:34.361 00:25:34.361 NVM Command Set Attributes 00:25:34.361 ========================== 00:25:34.361 Submission Queue Entry Size 00:25:34.361 Max: 1 00:25:34.361 Min: 1 00:25:34.361 Completion Queue Entry Size 00:25:34.361 Max: 1 00:25:34.361 Min: 1 00:25:34.361 Number of Namespaces: 0 00:25:34.361 Compare Command: Not Supported 00:25:34.361 Write Uncorrectable Command: Not Supported 00:25:34.361 Dataset Management Command: Not Supported 00:25:34.361 Write Zeroes Command: Not Supported 00:25:34.361 Set Features Save Field: Not Supported 00:25:34.361 Reservations: Not Supported 00:25:34.361 Timestamp: Not Supported 00:25:34.361 Copy: Not Supported 00:25:34.361 Volatile Write Cache: Not Present 00:25:34.361 Atomic Write Unit (Normal): 1 00:25:34.361 Atomic Write Unit (PFail): 1 00:25:34.361 Atomic Compare & Write Unit: 1 00:25:34.361 Fused Compare & Write: Supported 00:25:34.361 Scatter-Gather List 00:25:34.361 SGL Command Set: Supported 00:25:34.361 SGL Keyed: Supported 00:25:34.361 SGL Bit Bucket Descriptor: Not Supported 00:25:34.361 SGL Metadata Pointer: Not Supported 00:25:34.361 Oversized SGL: Not Supported 00:25:34.361 SGL Metadata Address: Not Supported 00:25:34.361 SGL Offset: Supported 00:25:34.361 Transport SGL Data Block: Not Supported 00:25:34.361 Replay Protected Memory Block: Not Supported 00:25:34.361 00:25:34.361 Firmware Slot Information 00:25:34.361 ========================= 00:25:34.361 Active slot: 0 00:25:34.361 00:25:34.361 00:25:34.361 Error Log 00:25:34.361 ========= 00:25:34.361 00:25:34.361 Active Namespaces 00:25:34.361 ================= 00:25:34.361 Discovery Log Page 00:25:34.361 ================== 00:25:34.361 Generation Counter: 2 00:25:34.361 Number of Records: 2 00:25:34.361 Record Format: 0 00:25:34.361 00:25:34.361 Discovery Log Entry 0 00:25:34.361 ---------------------- 00:25:34.361 Transport Type: 3 (TCP) 00:25:34.361 Address Family: 1 (IPv4) 00:25:34.361 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:34.361 Entry Flags: 00:25:34.361 Duplicate Returned Information: 1 00:25:34.361 Explicit Persistent Connection Support for Discovery: 1 00:25:34.361 Transport Requirements: 00:25:34.361 Secure Channel: Not Required 00:25:34.361 Port ID: 0 (0x0000) 00:25:34.361 Controller ID: 65535 (0xffff) 00:25:34.361 Admin Max SQ Size: 128 00:25:34.361 Transport Service Identifier: 4420 00:25:34.361 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:34.361 Transport Address: 10.0.0.2 00:25:34.361 
Discovery Log Entry 1 00:25:34.361 ---------------------- 00:25:34.361 Transport Type: 3 (TCP) 00:25:34.361 Address Family: 1 (IPv4) 00:25:34.361 Subsystem Type: 2 (NVM Subsystem) 00:25:34.361 Entry Flags: 00:25:34.361 Duplicate Returned Information: 0 00:25:34.361 Explicit Persistent Connection Support for Discovery: 0 00:25:34.361 Transport Requirements: 00:25:34.361 Secure Channel: Not Required 00:25:34.361 Port ID: 0 (0x0000) 00:25:34.361 Controller ID: 65535 (0xffff) 00:25:34.361 Admin Max SQ Size: 128 00:25:34.361 Transport Service Identifier: 4420 00:25:34.361 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:34.361 Transport Address: 10.0.0.2 [2024-07-23 01:05:18.344904] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:34.361 [2024-07-23 01:05:18.344929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.361 [2024-07-23 01:05:18.344941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.361 [2024-07-23 01:05:18.344950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.361 [2024-07-23 01:05:18.344960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.361 [2024-07-23 01:05:18.344973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.344981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.344988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.361 [2024-07-23 01:05:18.344999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.361 [2024-07-23 01:05:18.345038] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.361 [2024-07-23 01:05:18.345226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.361 [2024-07-23 01:05:18.345241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.361 [2024-07-23 01:05:18.345248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.361 [2024-07-23 01:05:18.345268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.361 [2024-07-23 01:05:18.345292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.361 [2024-07-23 01:05:18.345319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.361 [2024-07-23 01:05:18.345480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.361 [2024-07-23 01:05:18.345496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.361 [2024-07-23 01:05:18.345502] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345509] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.361 [2024-07-23 01:05:18.345518] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:34.361 [2024-07-23 01:05:18.345526] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:34.361 [2024-07-23 01:05:18.345542] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.361 [2024-07-23 01:05:18.345567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.361 [2024-07-23 01:05:18.345587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.361 [2024-07-23 01:05:18.345778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.361 [2024-07-23 01:05:18.345794] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.361 [2024-07-23 01:05:18.345800] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.361 [2024-07-23 01:05:18.345825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.345841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.361 [2024-07-23 01:05:18.345851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.361 [2024-07-23 01:05:18.345872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.361 [2024-07-23 01:05:18.346032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.361 [2024-07-23 01:05:18.346043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.361 [2024-07-23 01:05:18.346050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.361 [2024-07-23 01:05:18.346056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.346073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.346103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.346124] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.346281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 
01:05:18.346296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.346303] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.346327] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346342] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.346352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.346373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.346504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.346519] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.346525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.346549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346565] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.346575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.346595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.346734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.346750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.346756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346763] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.346780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.346796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.346806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.346826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.346955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.346967] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.346973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:34.362 [2024-07-23 01:05:18.346980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.346996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.347022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.347046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.347179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.347195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.347201] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347208] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.347225] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.347250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.347271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.347395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.347410] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.347417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.347441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.347456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.347466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.347487] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.351643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.351659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.351666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.351672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.351690] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.351714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.351720] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403eb0) 00:25:34.362 [2024-07-23 01:05:18.351731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.362 [2024-07-23 01:05:18.351753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x245d3a0, cid 3, qid 0 00:25:34.362 [2024-07-23 01:05:18.351889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.351901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.351907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.351914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x245d3a0) on tqpair=0x2403eb0 00:25:34.362 [2024-07-23 01:05:18.351928] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:34.362 00:25:34.362 01:05:18 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:34.362 [2024-07-23 01:05:18.381393] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:34.362 [2024-07-23 01:05:18.381434] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484405 ] 00:25:34.362 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.362 [2024-07-23 01:05:18.413222] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:34.362 [2024-07-23 01:05:18.413266] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:34.362 [2024-07-23 01:05:18.413275] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:34.362 [2024-07-23 01:05:18.413287] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:34.362 [2024-07-23 01:05:18.413298] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:34.362 [2024-07-23 01:05:18.413603] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:34.362 [2024-07-23 01:05:18.413650] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ae4eb0 0 00:25:34.362 [2024-07-23 01:05:18.427640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:34.362 [2024-07-23 01:05:18.427659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:34.362 [2024-07-23 01:05:18.427666] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:34.362 [2024-07-23 01:05:18.427672] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:34.362 [2024-07-23 01:05:18.427723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.427736] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.362 [2024-07-23 01:05:18.427743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.362 [2024-07-23 01:05:18.427756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:34.362 [2024-07-23 01:05:18.427783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.362 [2024-07-23 01:05:18.435630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.362 [2024-07-23 01:05:18.435647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.362 [2024-07-23 01:05:18.435654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.435661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.435675] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:34.363 [2024-07-23 01:05:18.435699] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:34.363 [2024-07-23 01:05:18.435708] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:34.363 [2024-07-23 01:05:18.435724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.435733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.435740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.435751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.435775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.435944] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.435960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.435967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.435978] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.435988] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:34.363 [2024-07-23 01:05:18.436002] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:34.363 [2024-07-23 01:05:18.436015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436029] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.436039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.436061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.436247] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.436259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.436266] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.436282] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:34.363 [2024-07-23 01:05:18.436296] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:34.363 [2024-07-23 01:05:18.436308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.436332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.436354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.436534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.436547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.436554] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.436570] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:34.363 [2024-07-23 01:05:18.436586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.436611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.436642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.436839] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.436855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.436861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.436868] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.436877] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:34.363 [2024-07-23 01:05:18.436890] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:34.363 
[2024-07-23 01:05:18.436903] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:34.363 [2024-07-23 01:05:18.437013] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:34.363 [2024-07-23 01:05:18.437021] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:34.363 [2024-07-23 01:05:18.437033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.437057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.437078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.437247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.437263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.437269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437276] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.437286] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:34.363 [2024-07-23 01:05:18.437303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.437329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.437350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.437479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.437492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.437498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.437514] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:34.363 [2024-07-23 01:05:18.437522] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:34.363 [2024-07-23 01:05:18.437535] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:34.363 [2024-07-23 01:05:18.437553] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:34.363 [2024-07-23 01:05:18.437567] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.363 [2024-07-23 01:05:18.437591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.363 [2024-07-23 01:05:18.437621] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.363 [2024-07-23 01:05:18.437828] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.363 [2024-07-23 01:05:18.437844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.363 [2024-07-23 01:05:18.437851] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437857] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=4096, cccid=0 00:25:34.363 [2024-07-23 01:05:18.437865] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3df80) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=4096 00:25:34.363 [2024-07-23 01:05:18.437876] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437884] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.363 [2024-07-23 01:05:18.437971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.363 [2024-07-23 01:05:18.437977] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.363 [2024-07-23 01:05:18.437984] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.363 [2024-07-23 01:05:18.437996] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:34.363 [2024-07-23 01:05:18.438004] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:34.363 [2024-07-23 01:05:18.438011] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:34.363 [2024-07-23 01:05:18.438018] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:34.363 [2024-07-23 01:05:18.438026] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:34.363 [2024-07-23 01:05:18.438034] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.438052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.438065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438079] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.438090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.364 [2024-07-23 01:05:18.438126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.364 [2024-07-23 01:05:18.438361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.364 [2024-07-23 01:05:18.438376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.364 [2024-07-23 01:05:18.438384] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438390] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3df80) on tqpair=0x1ae4eb0 00:25:34.364 [2024-07-23 01:05:18.438402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438409] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.438425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.364 [2024-07-23 01:05:18.438436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438443] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438449] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.438461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.364 [2024-07-23 01:05:18.438472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438495] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438501] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.438510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.364 [2024-07-23 01:05:18.438520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.438541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.364 [2024-07-23 01:05:18.438549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.438567] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.438580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 
01:05:18.438608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.438627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.364 [2024-07-23 01:05:18.438652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3df80, cid 0, qid 0 00:25:34.364 [2024-07-23 01:05:18.438663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e0e0, cid 1, qid 0 00:25:34.364 [2024-07-23 01:05:18.438671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e240, cid 2, qid 0 00:25:34.364 [2024-07-23 01:05:18.438679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.364 [2024-07-23 01:05:18.438686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.364 [2024-07-23 01:05:18.438917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.364 [2024-07-23 01:05:18.438933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.364 [2024-07-23 01:05:18.438940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.438946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.364 [2024-07-23 01:05:18.438955] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:34.364 [2024-07-23 01:05:18.438964] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.438993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.439008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.439019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.439027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.439033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.439043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.364 [2024-07-23 01:05:18.439067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.364 [2024-07-23 01:05:18.439239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.364 [2024-07-23 01:05:18.439255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.364 [2024-07-23 01:05:18.439262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.439269] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.364 [2024-07-23 01:05:18.439333] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.439351] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.439365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.439372] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.439379] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.439404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.364 [2024-07-23 01:05:18.439425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.364 [2024-07-23 01:05:18.443627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.364 [2024-07-23 01:05:18.443643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.364 [2024-07-23 01:05:18.443650] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.443656] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=4096, cccid=4 00:25:34.364 [2024-07-23 01:05:18.443664] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e500) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=4096 00:25:34.364 [2024-07-23 01:05:18.443674] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.443681] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.443690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.364 [2024-07-23 01:05:18.443698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.364 [2024-07-23 01:05:18.443704] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.443711] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.364 [2024-07-23 01:05:18.443731] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:34.364 [2024-07-23 01:05:18.443745] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.443762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:34.364 [2024-07-23 01:05:18.443791] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.443799] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.443805] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.364 [2024-07-23 01:05:18.443816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.364 [2024-07-23 01:05:18.443838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.364 [2024-07-23 01:05:18.444078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.364 [2024-07-23 01:05:18.444094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:25:34.364 [2024-07-23 01:05:18.444101] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.444107] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=4096, cccid=4 00:25:34.364 [2024-07-23 01:05:18.444119] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e500) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=4096 00:25:34.364 [2024-07-23 01:05:18.444130] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.444138] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.364 [2024-07-23 01:05:18.444204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.444215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.444222] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 01:05:18.444251] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444282] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.444307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.444328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.365 [2024-07-23 01:05:18.444519] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.365 [2024-07-23 01:05:18.444531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.365 [2024-07-23 01:05:18.444538] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444545] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=4096, cccid=4 00:25:34.365 [2024-07-23 01:05:18.444552] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e500) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=4096 00:25:34.365 [2024-07-23 01:05:18.444563] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444571] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.444633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.444639] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 
01:05:18.444660] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444674] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444689] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444699] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444708] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444716] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:34.365 [2024-07-23 01:05:18.444724] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:34.365 [2024-07-23 01:05:18.444735] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:34.365 [2024-07-23 01:05:18.444754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444763] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.444780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.444790] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.444804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.444813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.365 [2024-07-23 01:05:18.444838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.365 [2024-07-23 01:05:18.444850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e660, cid 5, qid 0 00:25:34.365 [2024-07-23 01:05:18.445031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.445047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.445053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445060] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 01:05:18.445071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.445081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.445087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445094] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e660) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 01:05:18.445110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.445136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.445171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e660, cid 5, qid 0 00:25:34.365 [2024-07-23 01:05:18.445402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.445415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.445422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e660) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 01:05:18.445445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.445471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.445491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e660, cid 5, qid 0 00:25:34.365 [2024-07-23 01:05:18.445628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.445643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.445650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e660) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 01:05:18.445678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.445694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.445704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.445726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e660, cid 5, qid 0 00:25:34.365 [2024-07-23 01:05:18.449624] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.365 [2024-07-23 01:05:18.449641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.365 [2024-07-23 01:05:18.449648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e660) on tqpair=0x1ae4eb0 00:25:34.365 [2024-07-23 01:05:18.449690] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.449718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.449730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.449753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.449765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.449787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.365 [2024-07-23 01:05:18.449799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.365 [2024-07-23 01:05:18.449813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ae4eb0) 00:25:34.365 [2024-07-23 01:05:18.449822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.366 [2024-07-23 01:05:18.449845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e660, cid 5, qid 0 00:25:34.366 [2024-07-23 01:05:18.449857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e500, cid 4, qid 0 00:25:34.366 [2024-07-23 01:05:18.449865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e7c0, cid 6, qid 0 00:25:34.366 [2024-07-23 01:05:18.449872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e920, cid 7, qid 0 00:25:34.366 [2024-07-23 01:05:18.450118] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.366 [2024-07-23 01:05:18.450133] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.366 [2024-07-23 01:05:18.450140] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450147] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=8192, cccid=5 00:25:34.366 [2024-07-23 01:05:18.450159] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e660) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=8192 00:25:34.366 [2024-07-23 01:05:18.450270] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450281] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450290] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.366 [2024-07-23 01:05:18.450299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.366 [2024-07-23 01:05:18.450305] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450312] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=512, cccid=4 00:25:34.366 [2024-07-23 01:05:18.450319] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e500) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=512 00:25:34.366 [2024-07-23 01:05:18.450329] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450336] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450345] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.366 [2024-07-23 01:05:18.450353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.366 [2024-07-23 01:05:18.450360] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450366] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=512, cccid=6 00:25:34.366 [2024-07-23 01:05:18.450374] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e7c0) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=512 00:25:34.366 [2024-07-23 01:05:18.450384] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450391] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.366 [2024-07-23 01:05:18.450408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.366 [2024-07-23 01:05:18.450414] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450421] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae4eb0): datao=0, datal=4096, cccid=7 00:25:34.366 [2024-07-23 01:05:18.450428] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3e920) on tqpair(0x1ae4eb0): expected_datao=0, payload_size=4096 00:25:34.366 [2024-07-23 01:05:18.450439] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450446] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.366 [2024-07-23 01:05:18.450467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.366 [2024-07-23 01:05:18.450473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450480] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e660) on tqpair=0x1ae4eb0 00:25:34.366 [2024-07-23 01:05:18.450500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.366 [2024-07-23 01:05:18.450511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.366 [2024-07-23 01:05:18.450518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1b3e500) on tqpair=0x1ae4eb0 00:25:34.366 [2024-07-23 01:05:18.450539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.366 [2024-07-23 01:05:18.450549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.366 [2024-07-23 01:05:18.450556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450562] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e7c0) on tqpair=0x1ae4eb0 00:25:34.366 [2024-07-23 01:05:18.450589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.366 [2024-07-23 01:05:18.450599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.366 [2024-07-23 01:05:18.450608] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.366 [2024-07-23 01:05:18.450623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e920) on tqpair=0x1ae4eb0 00:25:34.366 ===================================================== 00:25:34.366 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.366 ===================================================== 00:25:34.366 Controller Capabilities/Features 00:25:34.366 ================================ 00:25:34.366 Vendor ID: 8086 00:25:34.366 Subsystem Vendor ID: 8086 00:25:34.366 Serial Number: SPDK00000000000001 00:25:34.366 Model Number: SPDK bdev Controller 00:25:34.366 Firmware Version: 24.01.1 00:25:34.366 Recommended Arb Burst: 6 00:25:34.366 IEEE OUI Identifier: e4 d2 5c 00:25:34.366 Multi-path I/O 00:25:34.366 May have multiple subsystem ports: Yes 00:25:34.366 May have multiple controllers: Yes 00:25:34.366 Associated with SR-IOV VF: No 00:25:34.366 Max Data Transfer Size: 131072 00:25:34.366 Max Number of Namespaces: 32 00:25:34.366 Max Number of I/O Queues: 127 00:25:34.366 NVMe Specification Version (VS): 1.3 00:25:34.366 NVMe Specification Version (Identify): 1.3 00:25:34.366 Maximum Queue Entries: 128 00:25:34.366 Contiguous Queues Required: Yes 00:25:34.366 Arbitration Mechanisms Supported 00:25:34.366 Weighted Round Robin: Not Supported 00:25:34.366 Vendor Specific: Not Supported 00:25:34.366 Reset Timeout: 15000 ms 00:25:34.366 Doorbell Stride: 4 bytes 00:25:34.366 NVM Subsystem Reset: Not Supported 00:25:34.366 Command Sets Supported 00:25:34.366 NVM Command Set: Supported 00:25:34.366 Boot Partition: Not Supported 00:25:34.366 Memory Page Size Minimum: 4096 bytes 00:25:34.366 Memory Page Size Maximum: 4096 bytes 00:25:34.366 Persistent Memory Region: Not Supported 00:25:34.366 Optional Asynchronous Events Supported 00:25:34.366 Namespace Attribute Notices: Supported 00:25:34.366 Firmware Activation Notices: Not Supported 00:25:34.366 ANA Change Notices: Not Supported 00:25:34.366 PLE Aggregate Log Change Notices: Not Supported 00:25:34.366 LBA Status Info Alert Notices: Not Supported 00:25:34.366 EGE Aggregate Log Change Notices: Not Supported 00:25:34.366 Normal NVM Subsystem Shutdown event: Not Supported 00:25:34.366 Zone Descriptor Change Notices: Not Supported 00:25:34.366 Discovery Log Change Notices: Not Supported 00:25:34.366 Controller Attributes 00:25:34.366 128-bit Host Identifier: Supported 00:25:34.366 Non-Operational Permissive Mode: Not Supported 00:25:34.366 NVM Sets: Not Supported 00:25:34.366 Read Recovery Levels: Not Supported 00:25:34.366 Endurance Groups: Not Supported 00:25:34.366 Predictable Latency Mode: Not Supported 00:25:34.366 Traffic Based Keep ALive: Not Supported 00:25:34.366 
Namespace Granularity: Not Supported 00:25:34.366 SQ Associations: Not Supported 00:25:34.366 UUID List: Not Supported 00:25:34.366 Multi-Domain Subsystem: Not Supported 00:25:34.366 Fixed Capacity Management: Not Supported 00:25:34.366 Variable Capacity Management: Not Supported 00:25:34.366 Delete Endurance Group: Not Supported 00:25:34.366 Delete NVM Set: Not Supported 00:25:34.366 Extended LBA Formats Supported: Not Supported 00:25:34.366 Flexible Data Placement Supported: Not Supported 00:25:34.366 00:25:34.366 Controller Memory Buffer Support 00:25:34.366 ================================ 00:25:34.366 Supported: No 00:25:34.366 00:25:34.366 Persistent Memory Region Support 00:25:34.366 ================================ 00:25:34.366 Supported: No 00:25:34.366 00:25:34.366 Admin Command Set Attributes 00:25:34.366 ============================ 00:25:34.366 Security Send/Receive: Not Supported 00:25:34.366 Format NVM: Not Supported 00:25:34.366 Firmware Activate/Download: Not Supported 00:25:34.366 Namespace Management: Not Supported 00:25:34.366 Device Self-Test: Not Supported 00:25:34.366 Directives: Not Supported 00:25:34.366 NVMe-MI: Not Supported 00:25:34.366 Virtualization Management: Not Supported 00:25:34.366 Doorbell Buffer Config: Not Supported 00:25:34.366 Get LBA Status Capability: Not Supported 00:25:34.366 Command & Feature Lockdown Capability: Not Supported 00:25:34.366 Abort Command Limit: 4 00:25:34.366 Async Event Request Limit: 4 00:25:34.366 Number of Firmware Slots: N/A 00:25:34.366 Firmware Slot 1 Read-Only: N/A 00:25:34.366 Firmware Activation Without Reset: N/A 00:25:34.366 Multiple Update Detection Support: N/A 00:25:34.366 Firmware Update Granularity: No Information Provided 00:25:34.366 Per-Namespace SMART Log: No 00:25:34.366 Asymmetric Namespace Access Log Page: Not Supported 00:25:34.366 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:34.366 Command Effects Log Page: Supported 00:25:34.366 Get Log Page Extended Data: Supported 00:25:34.366 Telemetry Log Pages: Not Supported 00:25:34.366 Persistent Event Log Pages: Not Supported 00:25:34.366 Supported Log Pages Log Page: May Support 00:25:34.366 Commands Supported & Effects Log Page: Not Supported 00:25:34.367 Feature Identifiers & Effects Log Page:May Support 00:25:34.367 NVMe-MI Commands & Effects Log Page: May Support 00:25:34.367 Data Area 4 for Telemetry Log: Not Supported 00:25:34.367 Error Log Page Entries Supported: 128 00:25:34.367 Keep Alive: Supported 00:25:34.367 Keep Alive Granularity: 10000 ms 00:25:34.367 00:25:34.367 NVM Command Set Attributes 00:25:34.367 ========================== 00:25:34.367 Submission Queue Entry Size 00:25:34.367 Max: 64 00:25:34.367 Min: 64 00:25:34.367 Completion Queue Entry Size 00:25:34.367 Max: 16 00:25:34.367 Min: 16 00:25:34.367 Number of Namespaces: 32 00:25:34.367 Compare Command: Supported 00:25:34.367 Write Uncorrectable Command: Not Supported 00:25:34.367 Dataset Management Command: Supported 00:25:34.367 Write Zeroes Command: Supported 00:25:34.367 Set Features Save Field: Not Supported 00:25:34.367 Reservations: Supported 00:25:34.367 Timestamp: Not Supported 00:25:34.367 Copy: Supported 00:25:34.367 Volatile Write Cache: Present 00:25:34.367 Atomic Write Unit (Normal): 1 00:25:34.367 Atomic Write Unit (PFail): 1 00:25:34.367 Atomic Compare & Write Unit: 1 00:25:34.367 Fused Compare & Write: Supported 00:25:34.367 Scatter-Gather List 00:25:34.367 SGL Command Set: Supported 00:25:34.367 SGL Keyed: Supported 00:25:34.367 SGL Bit Bucket Descriptor: Not Supported 
00:25:34.367 SGL Metadata Pointer: Not Supported 00:25:34.367 Oversized SGL: Not Supported 00:25:34.367 SGL Metadata Address: Not Supported 00:25:34.367 SGL Offset: Supported 00:25:34.367 Transport SGL Data Block: Not Supported 00:25:34.367 Replay Protected Memory Block: Not Supported 00:25:34.367 00:25:34.367 Firmware Slot Information 00:25:34.367 ========================= 00:25:34.367 Active slot: 1 00:25:34.367 Slot 1 Firmware Revision: 24.01.1 00:25:34.367 00:25:34.367 00:25:34.367 Commands Supported and Effects 00:25:34.367 ============================== 00:25:34.367 Admin Commands 00:25:34.367 -------------- 00:25:34.367 Get Log Page (02h): Supported 00:25:34.367 Identify (06h): Supported 00:25:34.367 Abort (08h): Supported 00:25:34.367 Set Features (09h): Supported 00:25:34.367 Get Features (0Ah): Supported 00:25:34.367 Asynchronous Event Request (0Ch): Supported 00:25:34.367 Keep Alive (18h): Supported 00:25:34.367 I/O Commands 00:25:34.367 ------------ 00:25:34.367 Flush (00h): Supported LBA-Change 00:25:34.367 Write (01h): Supported LBA-Change 00:25:34.367 Read (02h): Supported 00:25:34.367 Compare (05h): Supported 00:25:34.367 Write Zeroes (08h): Supported LBA-Change 00:25:34.367 Dataset Management (09h): Supported LBA-Change 00:25:34.367 Copy (19h): Supported LBA-Change 00:25:34.367 Unknown (79h): Supported LBA-Change 00:25:34.367 Unknown (7Ah): Supported 00:25:34.367 00:25:34.367 Error Log 00:25:34.367 ========= 00:25:34.367 00:25:34.367 Arbitration 00:25:34.367 =========== 00:25:34.367 Arbitration Burst: 1 00:25:34.367 00:25:34.367 Power Management 00:25:34.367 ================ 00:25:34.367 Number of Power States: 1 00:25:34.367 Current Power State: Power State #0 00:25:34.367 Power State #0: 00:25:34.367 Max Power: 0.00 W 00:25:34.367 Non-Operational State: Operational 00:25:34.367 Entry Latency: Not Reported 00:25:34.367 Exit Latency: Not Reported 00:25:34.367 Relative Read Throughput: 0 00:25:34.367 Relative Read Latency: 0 00:25:34.367 Relative Write Throughput: 0 00:25:34.367 Relative Write Latency: 0 00:25:34.367 Idle Power: Not Reported 00:25:34.367 Active Power: Not Reported 00:25:34.367 Non-Operational Permissive Mode: Not Supported 00:25:34.367 00:25:34.367 Health Information 00:25:34.367 ================== 00:25:34.367 Critical Warnings: 00:25:34.367 Available Spare Space: OK 00:25:34.367 Temperature: OK 00:25:34.367 Device Reliability: OK 00:25:34.367 Read Only: No 00:25:34.367 Volatile Memory Backup: OK 00:25:34.367 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:34.367 Temperature Threshold: [2024-07-23 01:05:18.450763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.450775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.450782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ae4eb0) 00:25:34.367 [2024-07-23 01:05:18.450793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.367 [2024-07-23 01:05:18.450816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e920, cid 7, qid 0 00:25:34.367 [2024-07-23 01:05:18.450986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.367 [2024-07-23 01:05:18.451001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.367 [2024-07-23 01:05:18.451008] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:34.367 [2024-07-23 01:05:18.451015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e920) on tqpair=0x1ae4eb0 00:25:34.367 [2024-07-23 01:05:18.451056] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:34.367 [2024-07-23 01:05:18.451077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.367 [2024-07-23 01:05:18.451089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.367 [2024-07-23 01:05:18.451099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.367 [2024-07-23 01:05:18.451108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.367 [2024-07-23 01:05:18.451121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.367 [2024-07-23 01:05:18.451161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.367 [2024-07-23 01:05:18.451182] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.367 [2024-07-23 01:05:18.451383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.367 [2024-07-23 01:05:18.451396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.367 [2024-07-23 01:05:18.451402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.367 [2024-07-23 01:05:18.451422] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451429] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.367 [2024-07-23 01:05:18.451446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.367 [2024-07-23 01:05:18.451471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.367 [2024-07-23 01:05:18.451627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.367 [2024-07-23 01:05:18.451641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.367 [2024-07-23 01:05:18.451648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.367 [2024-07-23 01:05:18.451667] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:34.367 [2024-07-23 01:05:18.451675] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:34.367 [2024-07-23 
01:05:18.451691] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.367 [2024-07-23 01:05:18.451717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.367 [2024-07-23 01:05:18.451738] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.367 [2024-07-23 01:05:18.451917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.367 [2024-07-23 01:05:18.451936] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.367 [2024-07-23 01:05:18.451943] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451949] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.367 [2024-07-23 01:05:18.451967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.367 [2024-07-23 01:05:18.451982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.367 [2024-07-23 01:05:18.451992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.452012] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.452146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.452162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.452169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452175] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.452193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452202] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.452219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.452242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.452396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.452411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.452418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.452442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452451] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:34.368 [2024-07-23 01:05:18.452457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.452468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.452489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.452655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.452671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.452678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.452706] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.452732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.452753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.452935] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.452950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.452957] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.452981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.452997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.453007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.453028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.453156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.453168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.453175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453182] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.453199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453214] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.453224] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.453244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.453376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.453391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.453398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.453422] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453438] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.453448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.453469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.453625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.453638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.453645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.453673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453690] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.453700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.453721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.453902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.453918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.453925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453931] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.453949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.453964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.453975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.453996] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.454160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.454175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.454182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.454189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.454206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.454215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.454221] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.454232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.454252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.454378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.454391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.454397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.454404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.454421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.454430] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.454436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.454446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.454467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.454597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.368 [2024-07-23 01:05:18.458620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.458633] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.458640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.458678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.458688] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.458695] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae4eb0) 00:25:34.368 [2024-07-23 01:05:18.458706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.368 [2024-07-23 01:05:18.458728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3e3a0, cid 3, qid 0 00:25:34.368 [2024-07-23 01:05:18.458912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:34.368 [2024-07-23 01:05:18.458925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.368 [2024-07-23 01:05:18.458931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.368 [2024-07-23 01:05:18.458938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3e3a0) on tqpair=0x1ae4eb0 00:25:34.368 [2024-07-23 01:05:18.458952] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:34.369 0 Kelvin (-273 Celsius) 00:25:34.369 Available Spare: 0% 00:25:34.369 Available Spare Threshold: 0% 00:25:34.369 Life Percentage Used: 0% 00:25:34.369 Data Units Read: 0 00:25:34.369 Data Units Written: 0 00:25:34.369 Host Read Commands: 0 00:25:34.369 Host Write Commands: 0 00:25:34.369 Controller Busy Time: 0 minutes 00:25:34.369 Power Cycles: 0 00:25:34.369 Power On Hours: 0 hours 00:25:34.369 Unsafe Shutdowns: 0 00:25:34.369 Unrecoverable Media Errors: 0 00:25:34.369 Lifetime Error Log Entries: 0 00:25:34.369 Warning Temperature Time: 0 minutes 00:25:34.369 Critical Temperature Time: 0 minutes 00:25:34.369 00:25:34.369 Number of Queues 00:25:34.369 ================ 00:25:34.369 Number of I/O Submission Queues: 127 00:25:34.369 Number of I/O Completion Queues: 127 00:25:34.369 00:25:34.369 Active Namespaces 00:25:34.369 ================= 00:25:34.369 Namespace ID:1 00:25:34.369 Error Recovery Timeout: Unlimited 00:25:34.369 Command Set Identifier: NVM (00h) 00:25:34.369 Deallocate: Supported 00:25:34.369 Deallocated/Unwritten Error: Not Supported 00:25:34.369 Deallocated Read Value: Unknown 00:25:34.369 Deallocate in Write Zeroes: Not Supported 00:25:34.369 Deallocated Guard Field: 0xFFFF 00:25:34.369 Flush: Supported 00:25:34.369 Reservation: Supported 00:25:34.369 Namespace Sharing Capabilities: Multiple Controllers 00:25:34.369 Size (in LBAs): 131072 (0GiB) 00:25:34.369 Capacity (in LBAs): 131072 (0GiB) 00:25:34.369 Utilization (in LBAs): 131072 (0GiB) 00:25:34.369 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:34.369 EUI64: ABCDEF0123456789 00:25:34.369 UUID: 99fe6f23-e75a-4f8f-a20f-5eddb38097a0 00:25:34.369 Thin Provisioning: Not Supported 00:25:34.369 Per-NS Atomic Units: Yes 00:25:34.369 Atomic Boundary Size (Normal): 0 00:25:34.369 Atomic Boundary Size (PFail): 0 00:25:34.369 Atomic Boundary Offset: 0 00:25:34.369 Maximum Single Source Range Length: 65535 00:25:34.369 Maximum Copy Length: 65535 00:25:34.369 Maximum Source Range Count: 1 00:25:34.369 NGUID/EUI64 Never Reused: No 00:25:34.369 Namespace Write Protected: No 00:25:34.369 Number of LBA Formats: 1 00:25:34.369 Current LBA Format: LBA Format #00 00:25:34.369 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:34.369 00:25:34.369 01:05:18 -- host/identify.sh@51 -- # sync 00:25:34.369 01:05:18 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.369 01:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.369 01:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:34.369 01:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.369 01:05:18 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:34.369 01:05:18 -- host/identify.sh@56 -- # nvmftestfini 00:25:34.369 01:05:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:34.369 01:05:18 -- nvmf/common.sh@116 -- # sync 00:25:34.369 01:05:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:34.369 01:05:18 -- nvmf/common.sh@119 -- # set +e 00:25:34.369 01:05:18 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:25:34.369 01:05:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:34.369 rmmod nvme_tcp 00:25:34.369 rmmod nvme_fabrics 00:25:34.369 rmmod nvme_keyring 00:25:34.369 01:05:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:34.369 01:05:18 -- nvmf/common.sh@123 -- # set -e 00:25:34.369 01:05:18 -- nvmf/common.sh@124 -- # return 0 00:25:34.369 01:05:18 -- nvmf/common.sh@477 -- # '[' -n 3484236 ']' 00:25:34.369 01:05:18 -- nvmf/common.sh@478 -- # killprocess 3484236 00:25:34.369 01:05:18 -- common/autotest_common.sh@926 -- # '[' -z 3484236 ']' 00:25:34.369 01:05:18 -- common/autotest_common.sh@930 -- # kill -0 3484236 00:25:34.369 01:05:18 -- common/autotest_common.sh@931 -- # uname 00:25:34.369 01:05:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:34.369 01:05:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3484236 00:25:34.656 01:05:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:34.656 01:05:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:34.656 01:05:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3484236' 00:25:34.656 killing process with pid 3484236 00:25:34.656 01:05:18 -- common/autotest_common.sh@945 -- # kill 3484236 00:25:34.656 [2024-07-23 01:05:18.567286] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:34.656 01:05:18 -- common/autotest_common.sh@950 -- # wait 3484236 00:25:34.657 01:05:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:34.657 01:05:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:34.657 01:05:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:34.657 01:05:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.657 01:05:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:34.657 01:05:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.657 01:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.657 01:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.194 01:05:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:37.194 00:25:37.194 real 0m5.847s 00:25:37.194 user 0m6.905s 00:25:37.194 sys 0m1.803s 00:25:37.194 01:05:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.194 01:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:37.194 ************************************ 00:25:37.194 END TEST nvmf_identify 00:25:37.194 ************************************ 00:25:37.194 01:05:20 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:37.194 01:05:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:37.194 01:05:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:37.194 01:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:37.194 ************************************ 00:25:37.194 START TEST nvmf_perf 00:25:37.194 ************************************ 00:25:37.194 01:05:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:37.194 * Looking for test storage... 
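The nvmf_perf test that starts here (host/perf.sh --transport=tcp) brings up an SPDK NVMe-oF target over TCP and then drives it with spdk_nvme_perf, first against the local PCIe SSD and then over the fabric. The condensed sketch below is for orientation only: it restates the steps whose full xtrace appears further down, with the long workspace paths abbreviated to rpc.py and spdk_nvme_perf, and assumes nothing beyond what the trace itself shows.

  rpc.py bdev_malloc_create 64 512
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'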
00:25:37.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.194 01:05:20 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.194 01:05:20 -- nvmf/common.sh@7 -- # uname -s 00:25:37.194 01:05:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.194 01:05:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.194 01:05:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.194 01:05:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.194 01:05:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.194 01:05:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.194 01:05:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.194 01:05:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.194 01:05:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.194 01:05:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.194 01:05:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.194 01:05:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.194 01:05:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.194 01:05:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.194 01:05:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.194 01:05:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.194 01:05:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.194 01:05:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.194 01:05:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.194 01:05:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.194 01:05:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.194 01:05:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.194 01:05:20 -- paths/export.sh@5 -- # export PATH 00:25:37.194 01:05:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.194 01:05:20 -- nvmf/common.sh@46 -- # : 0 00:25:37.194 01:05:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:37.195 01:05:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:37.195 01:05:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:37.195 01:05:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.195 01:05:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.195 01:05:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:37.195 01:05:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:37.195 01:05:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:37.195 01:05:20 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:37.195 01:05:20 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:37.195 01:05:20 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:37.195 01:05:20 -- host/perf.sh@17 -- # nvmftestinit 00:25:37.195 01:05:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:37.195 01:05:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.195 01:05:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:37.195 01:05:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:37.195 01:05:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:37.195 01:05:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.195 01:05:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.195 01:05:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.195 01:05:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:37.195 01:05:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:37.195 01:05:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:37.195 01:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:39.100 01:05:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:39.100 01:05:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:39.100 01:05:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:39.100 01:05:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:39.100 01:05:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:39.100 01:05:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:39.100 01:05:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:39.100 01:05:22 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:39.100 01:05:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:39.100 01:05:22 -- nvmf/common.sh@295 -- # e810=() 00:25:39.100 01:05:22 -- nvmf/common.sh@295 -- # local -ga e810 00:25:39.100 01:05:22 -- nvmf/common.sh@296 -- # x722=() 00:25:39.100 01:05:22 -- nvmf/common.sh@296 -- # local -ga x722 00:25:39.100 01:05:22 -- nvmf/common.sh@297 -- # mlx=() 00:25:39.100 01:05:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:39.100 01:05:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.100 01:05:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:39.100 01:05:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:39.100 01:05:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:39.100 01:05:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:39.100 01:05:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:39.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:39.100 01:05:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:39.100 01:05:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:39.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:39.100 01:05:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:39.100 01:05:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:39.100 01:05:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.100 01:05:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:39.100 01:05:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:39.100 01:05:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:39.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:39.100 01:05:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.100 01:05:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:39.100 01:05:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.100 01:05:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:39.100 01:05:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.100 01:05:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:39.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:39.100 01:05:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.100 01:05:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:39.100 01:05:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:39.100 01:05:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:39.100 01:05:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:39.100 01:05:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.100 01:05:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.100 01:05:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.100 01:05:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:39.100 01:05:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.100 01:05:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.100 01:05:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:39.100 01:05:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.100 01:05:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.100 01:05:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:39.100 01:05:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:39.100 01:05:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.100 01:05:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.100 01:05:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.100 01:05:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.100 01:05:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:39.100 01:05:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.100 01:05:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.100 01:05:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.100 01:05:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:39.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:25:39.100 00:25:39.100 --- 10.0.0.2 ping statistics --- 00:25:39.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.100 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:39.100 01:05:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:25:39.100 00:25:39.100 --- 10.0.0.1 ping statistics --- 00:25:39.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.100 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:25:39.100 01:05:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.100 01:05:22 -- nvmf/common.sh@410 -- # return 0 00:25:39.100 01:05:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:39.100 01:05:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.101 01:05:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:39.101 01:05:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:39.101 01:05:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.101 01:05:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:39.101 01:05:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:39.101 01:05:22 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:39.101 01:05:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:39.101 01:05:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:39.101 01:05:22 -- common/autotest_common.sh@10 -- # set +x 00:25:39.101 01:05:22 -- nvmf/common.sh@469 -- # nvmfpid=3486341 00:25:39.101 01:05:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:39.101 01:05:22 -- nvmf/common.sh@470 -- # waitforlisten 3486341 00:25:39.101 01:05:22 -- common/autotest_common.sh@819 -- # '[' -z 3486341 ']' 00:25:39.101 01:05:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.101 01:05:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:39.101 01:05:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.101 01:05:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:39.101 01:05:22 -- common/autotest_common.sh@10 -- # set +x 00:25:39.101 [2024-07-23 01:05:23.044190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:39.101 [2024-07-23 01:05:23.044270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.101 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.101 [2024-07-23 01:05:23.112716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.101 [2024-07-23 01:05:23.208729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:39.101 [2024-07-23 01:05:23.208884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.101 [2024-07-23 01:05:23.208901] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.101 [2024-07-23 01:05:23.208923] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
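At this point nvmf_tgt has been started inside the cvl_0_0_ns_spdk network namespace: one port of the discovered ice (E810) NIC, cvl_0_0 at 10.0.0.2, was moved into the namespace to act as the target side, while the other port, cvl_0_1 at 10.0.0.1, stays in the default namespace as the initiator side, and the ping exchange above confirms the two sides reach each other. The minimal sketch below only restates the ip/iptables/ping commands already shown in the trace, using the interface names and addresses the trace discovered; it performs no setup beyond that.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # default namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> default namespace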
00:25:39.101 [2024-07-23 01:05:23.208977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.101 [2024-07-23 01:05:23.209013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.101 [2024-07-23 01:05:23.209083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.101 [2024-07-23 01:05:23.209086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.035 01:05:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:40.035 01:05:23 -- common/autotest_common.sh@852 -- # return 0 00:25:40.035 01:05:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:40.035 01:05:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:40.035 01:05:23 -- common/autotest_common.sh@10 -- # set +x 00:25:40.035 01:05:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.035 01:05:23 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:40.035 01:05:23 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:43.323 01:05:27 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:43.323 01:05:27 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:43.323 01:05:27 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:43.323 01:05:27 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:43.582 01:05:27 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:43.582 01:05:27 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:43.582 01:05:27 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:43.582 01:05:27 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:43.582 01:05:27 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:43.582 [2024-07-23 01:05:27.775095] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.840 01:05:27 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.840 01:05:28 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:43.840 01:05:28 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:44.098 01:05:28 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:44.098 01:05:28 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:44.356 01:05:28 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.614 [2024-07-23 01:05:28.718650] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.614 01:05:28 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:44.871 01:05:28 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:44.871 01:05:28 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:44.871 01:05:28 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
00:25:44.871 01:05:28 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:46.247 Initializing NVMe Controllers 00:25:46.247 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:46.247 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:46.247 Initialization complete. Launching workers. 00:25:46.247 ======================================================== 00:25:46.247 Latency(us) 00:25:46.247 Device Information : IOPS MiB/s Average min max 00:25:46.247 PCIE (0000:88:00.0) NSID 1 from core 0: 85431.36 333.72 374.11 27.29 5825.91 00:25:46.247 ======================================================== 00:25:46.247 Total : 85431.36 333.72 374.11 27.29 5825.91 00:25:46.247 00:25:46.247 01:05:30 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.247 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.622 Initializing NVMe Controllers 00:25:47.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:47.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:47.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:47.622 Initialization complete. Launching workers. 00:25:47.622 ======================================================== 00:25:47.622 Latency(us) 00:25:47.622 Device Information : IOPS MiB/s Average min max 00:25:47.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 161.00 0.63 6444.48 187.14 45888.51 00:25:47.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20478.15 4997.14 47923.20 00:25:47.622 ======================================================== 00:25:47.622 Total : 212.00 0.83 9820.50 187.14 47923.20 00:25:47.622 00:25:47.622 01:05:31 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:47.622 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.554 Initializing NVMe Controllers 00:25:48.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:48.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:48.554 Initialization complete. Launching workers. 
00:25:48.554 ======================================================== 00:25:48.554 Latency(us) 00:25:48.554 Device Information : IOPS MiB/s Average min max 00:25:48.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8324.04 32.52 3845.75 545.33 9096.30 00:25:48.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3958.59 15.46 8139.98 4631.70 15690.77 00:25:48.554 ======================================================== 00:25:48.554 Total : 12282.64 47.98 5229.75 545.33 15690.77 00:25:48.554 00:25:48.554 01:05:32 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:48.554 01:05:32 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:48.554 01:05:32 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:48.554 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.089 Initializing NVMe Controllers 00:25:51.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.089 Controller IO queue size 128, less than required. 00:25:51.089 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.089 Controller IO queue size 128, less than required. 00:25:51.089 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:51.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:51.089 Initialization complete. Launching workers. 00:25:51.089 ======================================================== 00:25:51.089 Latency(us) 00:25:51.089 Device Information : IOPS MiB/s Average min max 00:25:51.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 616.12 154.03 213764.64 111732.38 317785.64 00:25:51.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 528.17 132.04 253214.68 101847.82 375274.84 00:25:51.089 ======================================================== 00:25:51.089 Total : 1144.29 286.07 231973.68 101847.82 375274.84 00:25:51.089 00:25:51.089 01:05:35 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:51.089 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.347 No valid NVMe controllers or AIO or URING devices found 00:25:51.347 Initializing NVMe Controllers 00:25:51.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.347 Controller IO queue size 128, less than required. 00:25:51.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.347 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:51.347 Controller IO queue size 128, less than required. 00:25:51.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.347 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:51.347 WARNING: Some requested NVMe devices were skipped 00:25:51.347 01:05:35 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:51.605 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.138 Initializing NVMe Controllers 00:25:54.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.138 Controller IO queue size 128, less than required. 00:25:54.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.138 Controller IO queue size 128, less than required. 00:25:54.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:54.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:54.138 Initialization complete. Launching workers. 00:25:54.138 00:25:54.138 ==================== 00:25:54.138 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:54.138 TCP transport: 00:25:54.138 polls: 31240 00:25:54.138 idle_polls: 12723 00:25:54.138 sock_completions: 18517 00:25:54.138 nvme_completions: 3223 00:25:54.138 submitted_requests: 5031 00:25:54.138 queued_requests: 1 00:25:54.138 00:25:54.138 ==================== 00:25:54.138 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:54.138 TCP transport: 00:25:54.138 polls: 28613 00:25:54.138 idle_polls: 8806 00:25:54.138 sock_completions: 19807 00:25:54.138 nvme_completions: 3824 00:25:54.138 submitted_requests: 5846 00:25:54.138 queued_requests: 1 00:25:54.138 ======================================================== 00:25:54.138 Latency(us) 00:25:54.138 Device Information : IOPS MiB/s Average min max 00:25:54.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 868.99 217.25 153458.76 74991.59 233773.07 00:25:54.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1018.90 254.72 127235.57 71029.99 187300.96 00:25:54.138 ======================================================== 00:25:54.138 Total : 1887.88 471.97 139306.01 71029.99 233773.07 00:25:54.138 00:25:54.138 01:05:38 -- host/perf.sh@66 -- # sync 00:25:54.138 01:05:38 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.396 01:05:38 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:54.396 01:05:38 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:25:54.396 01:05:38 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:57.722 01:05:41 -- host/perf.sh@72 -- # ls_guid=f95751d8-a169-474d-803c-0d1571f651e2 00:25:57.722 01:05:41 -- host/perf.sh@73 -- # get_lvs_free_mb f95751d8-a169-474d-803c-0d1571f651e2 00:25:57.722 01:05:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f95751d8-a169-474d-803c-0d1571f651e2 00:25:57.722 01:05:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:57.722 01:05:41 -- common/autotest_common.sh@1345 -- # local fc 00:25:57.722 01:05:41 -- common/autotest_common.sh@1346 -- # local cs 00:25:57.722 01:05:41 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:57.979 01:05:42 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:57.979 { 00:25:57.979 "uuid": "f95751d8-a169-474d-803c-0d1571f651e2", 00:25:57.980 "name": "lvs_0", 00:25:57.980 "base_bdev": "Nvme0n1", 00:25:57.980 "total_data_clusters": 238234, 00:25:57.980 "free_clusters": 238234, 00:25:57.980 "block_size": 512, 00:25:57.980 "cluster_size": 4194304 00:25:57.980 } 00:25:57.980 ]' 00:25:57.980 01:05:42 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f95751d8-a169-474d-803c-0d1571f651e2") .free_clusters' 00:25:57.980 01:05:42 -- common/autotest_common.sh@1348 -- # fc=238234 00:25:57.980 01:05:42 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f95751d8-a169-474d-803c-0d1571f651e2") .cluster_size' 00:25:57.980 01:05:42 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:57.980 01:05:42 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:25:57.980 01:05:42 -- common/autotest_common.sh@1353 -- # echo 952936 00:25:57.980 952936 00:25:57.980 01:05:42 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:57.980 01:05:42 -- host/perf.sh@78 -- # free_mb=20480 00:25:57.980 01:05:42 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f95751d8-a169-474d-803c-0d1571f651e2 lbd_0 20480 00:25:58.547 01:05:42 -- host/perf.sh@80 -- # lb_guid=1291bdcf-f8fc-48d7-a16b-2bb7c2d9dbcd 00:25:58.547 01:05:42 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1291bdcf-f8fc-48d7-a16b-2bb7c2d9dbcd lvs_n_0 00:25:59.479 01:05:43 -- host/perf.sh@83 -- # ls_nested_guid=56a30022-1155-490c-81c1-115fac7d80cf 00:25:59.479 01:05:43 -- host/perf.sh@84 -- # get_lvs_free_mb 56a30022-1155-490c-81c1-115fac7d80cf 00:25:59.479 01:05:43 -- common/autotest_common.sh@1343 -- # local lvs_uuid=56a30022-1155-490c-81c1-115fac7d80cf 00:25:59.479 01:05:43 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:59.479 01:05:43 -- common/autotest_common.sh@1345 -- # local fc 00:25:59.479 01:05:43 -- common/autotest_common.sh@1346 -- # local cs 00:25:59.479 01:05:43 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:59.737 01:05:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:59.737 { 00:25:59.737 "uuid": "f95751d8-a169-474d-803c-0d1571f651e2", 00:25:59.737 "name": "lvs_0", 00:25:59.737 "base_bdev": "Nvme0n1", 00:25:59.737 "total_data_clusters": 238234, 00:25:59.737 "free_clusters": 233114, 00:25:59.737 "block_size": 512, 00:25:59.737 "cluster_size": 4194304 00:25:59.737 }, 00:25:59.737 { 00:25:59.737 "uuid": "56a30022-1155-490c-81c1-115fac7d80cf", 00:25:59.737 "name": "lvs_n_0", 00:25:59.737 "base_bdev": "1291bdcf-f8fc-48d7-a16b-2bb7c2d9dbcd", 00:25:59.737 "total_data_clusters": 5114, 00:25:59.737 "free_clusters": 5114, 00:25:59.737 "block_size": 512, 00:25:59.737 "cluster_size": 4194304 00:25:59.737 } 00:25:59.737 ]' 00:25:59.737 01:05:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="56a30022-1155-490c-81c1-115fac7d80cf") .free_clusters' 00:25:59.737 01:05:43 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:59.737 01:05:43 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="56a30022-1155-490c-81c1-115fac7d80cf") .cluster_size' 00:25:59.737 01:05:43 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:59.737 01:05:43 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:25:59.737 01:05:43 -- common/autotest_common.sh@1353 -- # echo 20456 00:25:59.737 20456 00:25:59.737 01:05:43 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:59.737 01:05:43 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56a30022-1155-490c-81c1-115fac7d80cf lbd_nest_0 20456 00:25:59.995 01:05:44 -- host/perf.sh@88 -- # lb_nested_guid=889da5cc-21e9-4797-a556-7f256744f800 00:25:59.995 01:05:44 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.253 01:05:44 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:00.253 01:05:44 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 889da5cc-21e9-4797-a556-7f256744f800 00:26:00.511 01:05:44 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.769 01:05:44 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:00.769 01:05:44 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:00.769 01:05:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:00.769 01:05:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:00.769 01:05:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:00.769 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.973 Initializing NVMe Controllers 00:26:12.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.973 Initialization complete. Launching workers. 00:26:12.973 ======================================================== 00:26:12.973 Latency(us) 00:26:12.973 Device Information : IOPS MiB/s Average min max 00:26:12.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.40 0.02 22114.90 221.65 47145.89 00:26:12.973 ======================================================== 00:26:12.973 Total : 45.40 0.02 22114.90 221.65 47145.89 00:26:12.973 00:26:12.973 01:05:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.973 01:05:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.973 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.947 Initializing NVMe Controllers 00:26:22.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.947 Initialization complete. Launching workers. 
00:26:22.947 ======================================================== 00:26:22.947 Latency(us) 00:26:22.947 Device Information : IOPS MiB/s Average min max 00:26:22.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.20 10.40 12026.52 4987.25 47885.38 00:26:22.947 ======================================================== 00:26:22.947 Total : 83.20 10.40 12026.52 4987.25 47885.38 00:26:22.947 00:26:22.947 01:06:05 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:22.947 01:06:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.947 01:06:05 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:22.947 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.918 Initializing NVMe Controllers 00:26:32.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:32.918 Initialization complete. Launching workers. 00:26:32.918 ======================================================== 00:26:32.918 Latency(us) 00:26:32.918 Device Information : IOPS MiB/s Average min max 00:26:32.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7371.67 3.60 4341.52 289.54 12104.35 00:26:32.918 ======================================================== 00:26:32.918 Total : 7371.67 3.60 4341.52 289.54 12104.35 00:26:32.918 00:26:32.918 01:06:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:32.918 01:06:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.918 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.922 Initializing NVMe Controllers 00:26:42.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:42.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:42.922 Initialization complete. Launching workers. 00:26:42.922 ======================================================== 00:26:42.922 Latency(us) 00:26:42.922 Device Information : IOPS MiB/s Average min max 00:26:42.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1955.51 244.44 16379.21 1094.93 37486.75 00:26:42.922 ======================================================== 00:26:42.922 Total : 1955.51 244.44 16379.21 1094.93 37486.75 00:26:42.922 00:26:42.922 01:06:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:42.922 01:06:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:42.922 01:06:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.922 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.904 Initializing NVMe Controllers 00:26:52.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.904 Controller IO queue size 128, less than required. 00:26:52.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.904 Initialization complete. Launching workers. 
00:26:52.904 ======================================================== 00:26:52.905 Latency(us) 00:26:52.905 Device Information : IOPS MiB/s Average min max 00:26:52.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11947.76 5.83 10713.27 1768.05 24988.77 00:26:52.905 ======================================================== 00:26:52.905 Total : 11947.76 5.83 10713.27 1768.05 24988.77 00:26:52.905 00:26:52.905 01:06:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:52.905 01:06:36 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:52.905 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.877 Initializing NVMe Controllers 00:27:02.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.877 Controller IO queue size 128, less than required. 00:27:02.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.877 Initialization complete. Launching workers. 00:27:02.877 ======================================================== 00:27:02.877 Latency(us) 00:27:02.877 Device Information : IOPS MiB/s Average min max 00:27:02.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.80 150.60 106277.86 22780.94 233927.67 00:27:02.877 ======================================================== 00:27:02.877 Total : 1204.80 150.60 106277.86 22780.94 233927.67 00:27:02.877 00:27:02.877 01:06:46 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.135 01:06:47 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 889da5cc-21e9-4797-a556-7f256744f800 00:27:03.703 01:06:47 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:03.960 01:06:48 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1291bdcf-f8fc-48d7-a16b-2bb7c2d9dbcd 00:27:04.218 01:06:48 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:04.477 01:06:48 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:04.477 01:06:48 -- host/perf.sh@114 -- # nvmftestfini 00:27:04.477 01:06:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:04.477 01:06:48 -- nvmf/common.sh@116 -- # sync 00:27:04.477 01:06:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:04.477 01:06:48 -- nvmf/common.sh@119 -- # set +e 00:27:04.477 01:06:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:04.477 01:06:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:04.477 rmmod nvme_tcp 00:27:04.477 rmmod nvme_fabrics 00:27:04.477 rmmod nvme_keyring 00:27:04.736 01:06:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:04.736 01:06:48 -- nvmf/common.sh@123 -- # set -e 00:27:04.736 01:06:48 -- nvmf/common.sh@124 -- # return 0 00:27:04.736 01:06:48 -- nvmf/common.sh@477 -- # '[' -n 3486341 ']' 00:27:04.736 01:06:48 -- nvmf/common.sh@478 -- # killprocess 3486341 00:27:04.736 01:06:48 -- common/autotest_common.sh@926 -- # '[' -z 3486341 ']' 00:27:04.736 01:06:48 -- common/autotest_common.sh@930 -- # 
kill -0 3486341 00:27:04.736 01:06:48 -- common/autotest_common.sh@931 -- # uname 00:27:04.736 01:06:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:04.736 01:06:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3486341 00:27:04.736 01:06:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:04.736 01:06:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:04.736 01:06:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3486341' 00:27:04.736 killing process with pid 3486341 00:27:04.736 01:06:48 -- common/autotest_common.sh@945 -- # kill 3486341 00:27:04.736 01:06:48 -- common/autotest_common.sh@950 -- # wait 3486341 00:27:06.641 01:06:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:06.641 01:06:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:06.641 01:06:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:06.641 01:06:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.641 01:06:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:06.641 01:06:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.641 01:06:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.641 01:06:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.543 01:06:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:08.543 00:27:08.543 real 1m31.519s 00:27:08.543 user 5m40.123s 00:27:08.543 sys 0m14.813s 00:27:08.543 01:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.543 01:06:52 -- common/autotest_common.sh@10 -- # set +x 00:27:08.543 ************************************ 00:27:08.543 END TEST nvmf_perf 00:27:08.543 ************************************ 00:27:08.543 01:06:52 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:08.543 01:06:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:08.543 01:06:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.543 01:06:52 -- common/autotest_common.sh@10 -- # set +x 00:27:08.543 ************************************ 00:27:08.543 START TEST nvmf_fio_host 00:27:08.543 ************************************ 00:27:08.543 01:06:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:08.543 * Looking for test storage... 
00:27:08.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.543 01:06:52 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.543 01:06:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.543 01:06:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.543 01:06:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.543 01:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- paths/export.sh@5 -- # export PATH 00:27:08.543 01:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.543 01:06:52 -- nvmf/common.sh@7 -- # uname -s 00:27:08.543 01:06:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.543 01:06:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.543 01:06:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.543 01:06:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.543 01:06:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.543 01:06:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.543 01:06:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.543 01:06:52 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.543 01:06:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.543 01:06:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.543 01:06:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.543 01:06:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.543 01:06:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.543 01:06:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.543 01:06:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.543 01:06:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.543 01:06:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.543 01:06:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.543 01:06:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.543 01:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- paths/export.sh@5 -- # export PATH 00:27:08.543 01:06:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 01:06:52 -- nvmf/common.sh@46 -- # : 0 00:27:08.543 01:06:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:08.543 01:06:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:08.543 01:06:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:08.543 01:06:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.543 01:06:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.543 01:06:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:08.543 01:06:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:08.543 01:06:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:08.543 01:06:52 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:08.543 01:06:52 -- host/fio.sh@14 -- # nvmftestinit 00:27:08.543 01:06:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:08.543 01:06:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.543 01:06:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:08.543 01:06:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:08.543 01:06:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:08.543 01:06:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.543 01:06:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.543 01:06:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.543 01:06:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:08.543 01:06:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:08.543 01:06:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:08.543 01:06:52 -- common/autotest_common.sh@10 -- # set +x 00:27:10.448 01:06:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:10.448 01:06:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:10.448 01:06:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:10.448 01:06:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:10.448 01:06:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:10.448 01:06:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:10.448 01:06:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:10.448 01:06:54 -- nvmf/common.sh@294 -- # net_devs=() 00:27:10.448 01:06:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:10.448 01:06:54 -- nvmf/common.sh@295 -- # e810=() 00:27:10.448 01:06:54 -- nvmf/common.sh@295 -- # local -ga e810 00:27:10.448 01:06:54 -- nvmf/common.sh@296 -- # x722=() 00:27:10.448 01:06:54 -- nvmf/common.sh@296 -- # local -ga x722 00:27:10.448 01:06:54 -- nvmf/common.sh@297 -- # mlx=() 00:27:10.448 01:06:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:10.448 01:06:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.448 01:06:54 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.448 01:06:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:10.448 01:06:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:10.448 01:06:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:10.448 01:06:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:10.448 01:06:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.448 01:06:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:10.448 01:06:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.448 01:06:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:10.448 01:06:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:10.448 01:06:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.448 01:06:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:10.448 01:06:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.448 01:06:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.448 01:06:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.448 01:06:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:10.448 01:06:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.448 01:06:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:10.448 01:06:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.448 01:06:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.448 Found net devices under 0000:0a:00.1: cvl_0_1 
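The cvl_0_0 and cvl_0_1 names reported above are resolved by globbing each E810 port's net/ directory in sysfs, so the PCI-address-to-netdev mapping can be reproduced by hand. A minimal sketch, assuming the 0000:0a:00.0 and 0000:0a:00.1 addresses shown in the log (the loop is illustrative, not the script's exact code):

    # print "<pci address> -> <kernel netdev>" for both E810 ports
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done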
00:27:10.448 01:06:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.448 01:06:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:10.448 01:06:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:10.448 01:06:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:10.448 01:06:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.448 01:06:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.448 01:06:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.448 01:06:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:10.448 01:06:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.448 01:06:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.448 01:06:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:10.448 01:06:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.448 01:06:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.448 01:06:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:10.448 01:06:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:10.448 01:06:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.448 01:06:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.448 01:06:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.448 01:06:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.448 01:06:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:10.448 01:06:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.448 01:06:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.448 01:06:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.448 01:06:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:10.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:27:10.448 00:27:10.448 --- 10.0.0.2 ping statistics --- 00:27:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.448 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:10.448 01:06:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:27:10.448 00:27:10.448 --- 10.0.0.1 ping statistics --- 00:27:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.448 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:10.448 01:06:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.448 01:06:54 -- nvmf/common.sh@410 -- # return 0 00:27:10.448 01:06:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:10.448 01:06:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.448 01:06:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:10.448 01:06:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.448 01:06:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:10.448 01:06:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:10.448 01:06:54 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:10.448 01:06:54 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:10.448 01:06:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.448 01:06:54 -- common/autotest_common.sh@10 -- # set +x 00:27:10.448 01:06:54 -- host/fio.sh@24 -- # nvmfpid=3499388 00:27:10.448 01:06:54 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.448 01:06:54 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.448 01:06:54 -- host/fio.sh@28 -- # waitforlisten 3499388 00:27:10.448 01:06:54 -- common/autotest_common.sh@819 -- # '[' -z 3499388 ']' 00:27:10.448 01:06:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.448 01:06:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:10.449 01:06:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.449 01:06:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:10.449 01:06:54 -- common/autotest_common.sh@10 -- # set +x 00:27:10.449 [2024-07-23 01:06:54.574028] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:10.449 [2024-07-23 01:06:54.574117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.449 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.449 [2024-07-23 01:06:54.636804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.707 [2024-07-23 01:06:54.721637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:10.707 [2024-07-23 01:06:54.721787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.707 [2024-07-23 01:06:54.721814] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.707 [2024-07-23 01:06:54.721827] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:10.707 [2024-07-23 01:06:54.721878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.707 [2024-07-23 01:06:54.721904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.707 [2024-07-23 01:06:54.721972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.707 [2024-07-23 01:06:54.721974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.641 01:06:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:11.641 01:06:55 -- common/autotest_common.sh@852 -- # return 0 00:27:11.641 01:06:55 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:11.642 [2024-07-23 01:06:55.796089] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.642 01:06:55 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:11.642 01:06:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:11.642 01:06:55 -- common/autotest_common.sh@10 -- # set +x 00:27:11.642 01:06:55 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:11.900 Malloc1 00:27:11.900 01:06:56 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:12.158 01:06:56 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:12.415 01:06:56 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.678 [2024-07-23 01:06:56.786843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.678 01:06:56 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.984 01:06:57 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:12.984 01:06:57 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.984 01:06:57 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.984 01:06:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:12.984 01:06:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.984 01:06:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:12.984 01:06:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.984 01:06:57 -- common/autotest_common.sh@1320 -- # shift 00:27:12.984 01:06:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:12.984 01:06:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:12.984 01:06:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:12.984 01:06:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:12.984 01:06:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:12.984 01:06:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:12.984 01:06:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:12.984 01:06:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:13.244 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:13.244 fio-3.35 00:27:13.244 Starting 1 thread 00:27:13.244 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.774 00:27:15.774 test: (groupid=0, jobs=1): err= 0: pid=3499887: Tue Jul 23 01:06:59 2024 00:27:15.774 read: IOPS=9474, BW=37.0MiB/s (38.8MB/s)(74.2MiB/2006msec) 00:27:15.774 slat (nsec): min=1930, max=110139, avg=2416.70, stdev=1406.51 00:27:15.774 clat (usec): min=3369, max=13086, avg=7485.15, stdev=606.87 00:27:15.774 lat (usec): min=3381, max=13089, avg=7487.57, stdev=606.80 00:27:15.774 clat percentiles (usec): 00:27:15.774 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:27:15.774 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:27:15.774 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:27:15.774 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11469], 99.95th=[12387], 00:27:15.774 | 99.99th=[13042] 00:27:15.774 bw ( KiB/s): min=35552, max=38848, per=99.96%, avg=37882.00, stdev=1559.67, samples=4 00:27:15.774 iops : min= 8888, max= 9712, avg=9470.50, stdev=389.92, samples=4 00:27:15.774 write: IOPS=9479, BW=37.0MiB/s (38.8MB/s)(74.3MiB/2006msec); 0 zone resets 00:27:15.774 slat (nsec): min=2043, max=82410, avg=2567.06, stdev=1309.07 00:27:15.774 clat (usec): min=1393, max=12325, avg=5984.53, stdev=532.07 00:27:15.774 lat (usec): min=1399, max=12327, avg=5987.09, stdev=532.02 00:27:15.774 clat percentiles (usec): 00:27:15.774 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:27:15.774 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128], 00:27:15.774 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:27:15.774 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 9372], 99.95th=[10552], 00:27:15.774 | 99.99th=[12256] 00:27:15.774 bw ( KiB/s): min=36304, max=38656, per=99.97%, avg=37908.00, stdev=1084.97, samples=4 00:27:15.774 iops : min= 9076, max= 9664, avg=9477.00, stdev=271.24, samples=4 00:27:15.774 lat (msec) : 2=0.01%, 4=0.12%, 10=99.76%, 20=0.11% 00:27:15.774 cpu : usr=55.26%, sys=37.26%, ctx=60, majf=0, minf=37 00:27:15.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:15.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.774 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:15.774 issued rwts: total=19006,19016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:15.774 00:27:15.774 Run status group 0 (all jobs): 00:27:15.774 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.2MiB (77.8MB), run=2006-2006msec 00:27:15.774 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.3MiB (77.9MB), run=2006-2006msec 00:27:15.774 01:06:59 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:15.774 01:06:59 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:15.774 01:06:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:15.774 01:06:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.774 01:06:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:15.774 01:06:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.775 01:06:59 -- common/autotest_common.sh@1320 -- # shift 00:27:15.775 01:06:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:15.775 01:06:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:15.775 01:06:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:15.775 01:06:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:15.775 01:06:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:15.775 01:06:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:15.775 01:06:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:15.775 01:06:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:15.775 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:15.775 fio-3.35 00:27:15.775 Starting 1 thread 00:27:15.775 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.708 [2024-07-23 01:07:00.734199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1983360 is same with the state(5) to be set 00:27:18.081 00:27:18.081 test: (groupid=0, jobs=1): err= 0: pid=3500227: Tue Jul 23 01:07:02 2024 00:27:18.081 
read: IOPS=8382, BW=131MiB/s (137MB/s)(263MiB/2005msec) 00:27:18.081 slat (usec): min=2, max=108, avg= 3.66, stdev= 1.78 00:27:18.081 clat (usec): min=2412, max=52455, avg=9279.38, stdev=4026.04 00:27:18.081 lat (usec): min=2416, max=52458, avg=9283.05, stdev=4026.09 00:27:18.081 clat percentiles (usec): 00:27:18.081 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7111], 00:27:18.081 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:27:18.081 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11731], 95.00th=[13042], 00:27:18.081 | 99.00th=[16909], 99.50th=[47449], 99.90th=[51643], 99.95th=[52167], 00:27:18.081 | 99.99th=[52167] 00:27:18.081 bw ( KiB/s): min=54944, max=78880, per=50.77%, avg=68088.00, stdev=11359.40, samples=4 00:27:18.081 iops : min= 3434, max= 4930, avg=4255.50, stdev=709.96, samples=4 00:27:18.081 write: IOPS=4995, BW=78.1MiB/s (81.8MB/s)(140MiB/1788msec); 0 zone resets 00:27:18.081 slat (usec): min=30, max=200, avg=34.06, stdev= 5.94 00:27:18.081 clat (usec): min=2465, max=18000, avg=10494.37, stdev=1785.53 00:27:18.081 lat (usec): min=2497, max=18032, avg=10528.44, stdev=1786.32 00:27:18.081 clat percentiles (usec): 00:27:18.081 | 1.00th=[ 7177], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 8979], 00:27:18.081 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10683], 00:27:18.081 | 70.00th=[11338], 80.00th=[11994], 90.00th=[12911], 95.00th=[13829], 00:27:18.081 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16712], 99.95th=[16712], 00:27:18.081 | 99.99th=[17957] 00:27:18.081 bw ( KiB/s): min=56160, max=81504, per=88.87%, avg=71032.00, stdev=12225.49, samples=4 00:27:18.081 iops : min= 3510, max= 5094, avg=4439.50, stdev=764.09, samples=4 00:27:18.081 lat (msec) : 4=0.13%, 10=61.20%, 20=38.18%, 50=0.32%, 100=0.17% 00:27:18.081 cpu : usr=73.70%, sys=22.46%, ctx=23, majf=0, minf=63 00:27:18.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:18.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.081 issued rwts: total=16807,8932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.081 00:27:18.081 Run status group 0 (all jobs): 00:27:18.081 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (275MB), run=2005-2005msec 00:27:18.081 WRITE: bw=78.1MiB/s (81.8MB/s), 78.1MiB/s-78.1MiB/s (81.8MB/s-81.8MB/s), io=140MiB (146MB), run=1788-1788msec 00:27:18.081 01:07:02 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.339 01:07:02 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:18.339 01:07:02 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:18.339 01:07:02 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:18.339 01:07:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:18.339 01:07:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:18.339 01:07:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:18.339 01:07:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:18.339 01:07:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:18.339 01:07:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:18.339 01:07:02 -- common/autotest_common.sh@1504 
-- # printf '%s\n' 0000:88:00.0 00:27:18.339 01:07:02 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:21.631 Nvme0n1 00:27:21.631 01:07:05 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:24.916 01:07:08 -- host/fio.sh@53 -- # ls_guid=8bb889b6-d483-4df2-b7fd-94677d4d58f4 00:27:24.916 01:07:08 -- host/fio.sh@54 -- # get_lvs_free_mb 8bb889b6-d483-4df2-b7fd-94677d4d58f4 00:27:24.916 01:07:08 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8bb889b6-d483-4df2-b7fd-94677d4d58f4 00:27:24.917 01:07:08 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:24.917 01:07:08 -- common/autotest_common.sh@1345 -- # local fc 00:27:24.917 01:07:08 -- common/autotest_common.sh@1346 -- # local cs 00:27:24.917 01:07:08 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:24.917 01:07:08 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:24.917 { 00:27:24.917 "uuid": "8bb889b6-d483-4df2-b7fd-94677d4d58f4", 00:27:24.917 "name": "lvs_0", 00:27:24.917 "base_bdev": "Nvme0n1", 00:27:24.917 "total_data_clusters": 930, 00:27:24.917 "free_clusters": 930, 00:27:24.917 "block_size": 512, 00:27:24.917 "cluster_size": 1073741824 00:27:24.917 } 00:27:24.917 ]' 00:27:24.917 01:07:08 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8bb889b6-d483-4df2-b7fd-94677d4d58f4") .free_clusters' 00:27:24.917 01:07:08 -- common/autotest_common.sh@1348 -- # fc=930 00:27:24.917 01:07:08 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8bb889b6-d483-4df2-b7fd-94677d4d58f4") .cluster_size' 00:27:24.917 01:07:08 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:24.917 01:07:08 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:24.917 01:07:08 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:24.917 952320 00:27:24.917 01:07:08 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:24.917 64028eb6-b15d-4d98-8540-4c3537cdaa6e 00:27:24.917 01:07:09 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:25.174 01:07:09 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:25.431 01:07:09 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:25.689 01:07:09 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:25.689 01:07:09 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:25.689 01:07:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:25.689 01:07:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.689 01:07:09 -- 
common/autotest_common.sh@1318 -- # local sanitizers 00:27:25.689 01:07:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.689 01:07:09 -- common/autotest_common.sh@1320 -- # shift 00:27:25.689 01:07:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:25.689 01:07:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:25.689 01:07:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:25.689 01:07:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:25.689 01:07:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:25.689 01:07:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:25.689 01:07:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:25.689 01:07:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:25.948 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:25.948 fio-3.35 00:27:25.948 Starting 1 thread 00:27:25.948 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.474 00:27:28.474 test: (groupid=0, jobs=1): err= 0: pid=3501545: Tue Jul 23 01:07:12 2024 00:27:28.474 read: IOPS=5024, BW=19.6MiB/s (20.6MB/s)(39.4MiB/2008msec) 00:27:28.474 slat (nsec): min=1908, max=185360, avg=2599.99, stdev=2703.41 00:27:28.474 clat (usec): min=1387, max=175355, avg=13975.95, stdev=12715.25 00:27:28.474 lat (usec): min=1390, max=175390, avg=13978.55, stdev=12715.65 00:27:28.474 clat percentiles (msec): 00:27:28.474 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:27:28.474 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:27:28.474 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 15], 95.00th=[ 16], 00:27:28.474 | 99.00th=[ 18], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:27:28.474 | 99.99th=[ 176] 00:27:28.474 bw ( KiB/s): min=13696, max=22304, per=99.68%, avg=20034.00, stdev=4227.01, samples=4 00:27:28.474 iops : min= 3424, max= 5576, avg=5008.50, stdev=1056.75, samples=4 00:27:28.474 write: IOPS=5017, BW=19.6MiB/s (20.6MB/s)(39.4MiB/2008msec); 0 zone resets 00:27:28.474 slat (usec): min=2, max=151, avg= 2.76, stdev= 2.19 00:27:28.474 clat (usec): min=571, max=172124, avg=11316.38, stdev=11866.29 00:27:28.474 lat (usec): min=574, max=172132, avg=11319.14, stdev=11866.68 00:27:28.474 clat percentiles (msec): 00:27:28.474 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:27:28.474 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:28.474 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:27:28.474 | 
99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:28.474 | 99.99th=[ 174] 00:27:28.474 bw ( KiB/s): min=14272, max=22248, per=99.85%, avg=20042.00, stdev=3853.27, samples=4 00:27:28.474 iops : min= 3568, max= 5562, avg=5010.50, stdev=963.32, samples=4 00:27:28.474 lat (usec) : 750=0.01%, 1000=0.01% 00:27:28.474 lat (msec) : 2=0.01%, 4=0.09%, 10=17.82%, 20=81.37%, 50=0.05% 00:27:28.474 lat (msec) : 250=0.63% 00:27:28.474 cpu : usr=53.81%, sys=42.25%, ctx=97, majf=0, minf=37 00:27:28.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:28.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:28.474 issued rwts: total=10089,10076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:28.474 00:27:28.474 Run status group 0 (all jobs): 00:27:28.474 READ: bw=19.6MiB/s (20.6MB/s), 19.6MiB/s-19.6MiB/s (20.6MB/s-20.6MB/s), io=39.4MiB (41.3MB), run=2008-2008msec 00:27:28.474 WRITE: bw=19.6MiB/s (20.6MB/s), 19.6MiB/s-19.6MiB/s (20.6MB/s-20.6MB/s), io=39.4MiB (41.3MB), run=2008-2008msec 00:27:28.474 01:07:12 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:28.474 01:07:12 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:29.847 01:07:13 -- host/fio.sh@64 -- # ls_nested_guid=1773899c-3d0d-4339-bc7f-7d6285bae764 00:27:29.847 01:07:13 -- host/fio.sh@65 -- # get_lvs_free_mb 1773899c-3d0d-4339-bc7f-7d6285bae764 00:27:29.847 01:07:13 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1773899c-3d0d-4339-bc7f-7d6285bae764 00:27:29.847 01:07:13 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:29.847 01:07:13 -- common/autotest_common.sh@1345 -- # local fc 00:27:29.848 01:07:13 -- common/autotest_common.sh@1346 -- # local cs 00:27:29.848 01:07:13 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:29.848 01:07:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:29.848 { 00:27:29.848 "uuid": "8bb889b6-d483-4df2-b7fd-94677d4d58f4", 00:27:29.848 "name": "lvs_0", 00:27:29.848 "base_bdev": "Nvme0n1", 00:27:29.848 "total_data_clusters": 930, 00:27:29.848 "free_clusters": 0, 00:27:29.848 "block_size": 512, 00:27:29.848 "cluster_size": 1073741824 00:27:29.848 }, 00:27:29.848 { 00:27:29.848 "uuid": "1773899c-3d0d-4339-bc7f-7d6285bae764", 00:27:29.848 "name": "lvs_n_0", 00:27:29.848 "base_bdev": "64028eb6-b15d-4d98-8540-4c3537cdaa6e", 00:27:29.848 "total_data_clusters": 237847, 00:27:29.848 "free_clusters": 237847, 00:27:29.848 "block_size": 512, 00:27:29.848 "cluster_size": 4194304 00:27:29.848 } 00:27:29.848 ]' 00:27:29.848 01:07:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1773899c-3d0d-4339-bc7f-7d6285bae764") .free_clusters' 00:27:30.106 01:07:14 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:30.106 01:07:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1773899c-3d0d-4339-bc7f-7d6285bae764") .cluster_size' 00:27:30.106 01:07:14 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:30.106 01:07:14 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:30.106 01:07:14 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:30.106 951388 
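The two get_lvs_free_mb lookups above boil down to the same arithmetic: free_clusters * cluster_size expressed in MiB (930 clusters * 1 GiB = 952320 MiB for lvs_0, 237847 clusters * 4 MiB = 951388 MiB for lvs_n_0). A minimal standalone sketch of that query, assuming rpc.py stands for the full scripts/rpc.py path used in the trace and that jq is installed:

  # Hypothetical helper mirroring get_lvs_free_mb: look up an lvol store by UUID and
  # print its free space in MiB (free_clusters * cluster_size / 1 MiB).
  lvs_free_mb() {
      local uuid=$1 info fc cs
      info=$(rpc.py bdev_lvol_get_lvstores)                                 # JSON array of lvol stores
      fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$info")
      cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$info")
      echo $(( fc * cs / 1024 / 1024 ))
  }
  lvs_free_mb 8bb889b6-d483-4df2-b7fd-94677d4d58f4   # lvs_0   -> 952320
  lvs_free_mb 1773899c-3d0d-4339-bc7f-7d6285bae764   # lvs_n_0 -> 951388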
00:27:30.106 01:07:14 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:30.674 ad72a029-4dd5-49fa-900b-cfcc5bab190a 00:27:30.674 01:07:14 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:30.932 01:07:14 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:31.190 01:07:15 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:31.447 01:07:15 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:31.447 01:07:15 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:31.447 01:07:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:31.447 01:07:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.447 01:07:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:31.447 01:07:15 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.447 01:07:15 -- common/autotest_common.sh@1320 -- # shift 00:27:31.447 01:07:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:31.447 01:07:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:31.447 01:07:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:31.447 01:07:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:31.447 01:07:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:31.447 01:07:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:31.447 01:07:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:31.447 01:07:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:31.704 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:31.704 fio-3.35 00:27:31.704 Starting 1 thread 00:27:31.704 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.252 00:27:34.252 test: 
(groupid=0, jobs=1): err= 0: pid=3502302: Tue Jul 23 01:07:18 2024 00:27:34.252 read: IOPS=6070, BW=23.7MiB/s (24.9MB/s)(47.6MiB/2008msec) 00:27:34.252 slat (nsec): min=1916, max=170634, avg=2607.28, stdev=2517.78 00:27:34.252 clat (usec): min=4085, max=20426, avg=11659.09, stdev=971.64 00:27:34.252 lat (usec): min=4120, max=20428, avg=11661.69, stdev=971.50 00:27:34.252 clat percentiles (usec): 00:27:34.252 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10945], 00:27:34.252 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:27:34.252 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:27:34.252 | 99.00th=[13698], 99.50th=[13960], 99.90th=[17957], 99.95th=[18220], 00:27:34.252 | 99.99th=[19530] 00:27:34.252 bw ( KiB/s): min=22832, max=24800, per=99.81%, avg=24234.00, stdev=941.41, samples=4 00:27:34.252 iops : min= 5708, max= 6200, avg=6058.50, stdev=235.35, samples=4 00:27:34.252 write: IOPS=6051, BW=23.6MiB/s (24.8MB/s)(47.5MiB/2008msec); 0 zone resets 00:27:34.252 slat (usec): min=2, max=150, avg= 2.76, stdev= 2.13 00:27:34.252 clat (usec): min=3194, max=16903, avg=9268.09, stdev=850.94 00:27:34.252 lat (usec): min=3204, max=16905, avg=9270.85, stdev=850.87 00:27:34.252 clat percentiles (usec): 00:27:34.252 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8291], 20.00th=[ 8586], 00:27:34.252 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:27:34.252 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:27:34.252 | 99.00th=[11076], 99.50th=[11338], 99.90th=[13698], 99.95th=[15139], 00:27:34.252 | 99.99th=[16909] 00:27:34.252 bw ( KiB/s): min=23744, max=24512, per=99.95%, avg=24192.00, stdev=322.13, samples=4 00:27:34.252 iops : min= 5936, max= 6128, avg=6048.00, stdev=80.53, samples=4 00:27:34.252 lat (msec) : 4=0.03%, 10=42.98%, 20=56.98%, 50=0.01% 00:27:34.252 cpu : usr=55.85%, sys=39.51%, ctx=103, majf=0, minf=37 00:27:34.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:34.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:34.252 issued rwts: total=12189,12151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:34.252 00:27:34.252 Run status group 0 (all jobs): 00:27:34.252 READ: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.6MiB (49.9MB), run=2008-2008msec 00:27:34.252 WRITE: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2008-2008msec 00:27:34.252 01:07:18 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:34.252 01:07:18 -- host/fio.sh@74 -- # sync 00:27:34.252 01:07:18 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:38.452 01:07:22 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:38.452 01:07:22 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:41.734 01:07:25 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:41.735 01:07:25 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_detach_controller Nvme0 00:27:43.638 01:07:27 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:43.638 01:07:27 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:43.638 01:07:27 -- host/fio.sh@86 -- # nvmftestfini 00:27:43.638 01:07:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:43.638 01:07:27 -- nvmf/common.sh@116 -- # sync 00:27:43.638 01:07:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:43.638 01:07:27 -- nvmf/common.sh@119 -- # set +e 00:27:43.638 01:07:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:43.638 01:07:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:43.638 rmmod nvme_tcp 00:27:43.638 rmmod nvme_fabrics 00:27:43.638 rmmod nvme_keyring 00:27:43.638 01:07:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:43.638 01:07:27 -- nvmf/common.sh@123 -- # set -e 00:27:43.638 01:07:27 -- nvmf/common.sh@124 -- # return 0 00:27:43.638 01:07:27 -- nvmf/common.sh@477 -- # '[' -n 3499388 ']' 00:27:43.638 01:07:27 -- nvmf/common.sh@478 -- # killprocess 3499388 00:27:43.638 01:07:27 -- common/autotest_common.sh@926 -- # '[' -z 3499388 ']' 00:27:43.638 01:07:27 -- common/autotest_common.sh@930 -- # kill -0 3499388 00:27:43.638 01:07:27 -- common/autotest_common.sh@931 -- # uname 00:27:43.638 01:07:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:43.638 01:07:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3499388 00:27:43.638 01:07:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:43.638 01:07:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:43.638 01:07:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3499388' 00:27:43.638 killing process with pid 3499388 00:27:43.638 01:07:27 -- common/autotest_common.sh@945 -- # kill 3499388 00:27:43.638 01:07:27 -- common/autotest_common.sh@950 -- # wait 3499388 00:27:43.638 01:07:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:43.638 01:07:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:43.638 01:07:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:43.638 01:07:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.638 01:07:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:43.638 01:07:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.638 01:07:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.638 01:07:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.178 01:07:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:46.178 00:27:46.178 real 0m37.393s 00:27:46.178 user 2m22.896s 00:27:46.178 sys 0m7.209s 00:27:46.178 01:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.178 01:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:46.178 ************************************ 00:27:46.178 END TEST nvmf_fio_host 00:27:46.178 ************************************ 00:27:46.178 01:07:29 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:46.178 01:07:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:46.178 01:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.178 01:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:46.178 ************************************ 00:27:46.178 START TEST nvmf_failover 00:27:46.178 ************************************ 00:27:46.178 01:07:29 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:46.178 * Looking for test storage... 00:27:46.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.178 01:07:29 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.178 01:07:29 -- nvmf/common.sh@7 -- # uname -s 00:27:46.178 01:07:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.178 01:07:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.178 01:07:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.178 01:07:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.178 01:07:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.178 01:07:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.178 01:07:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.178 01:07:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.178 01:07:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.178 01:07:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.178 01:07:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.178 01:07:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.178 01:07:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.178 01:07:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.178 01:07:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.178 01:07:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.178 01:07:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.178 01:07:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.178 01:07:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.178 01:07:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.178 01:07:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.178 01:07:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.178 01:07:29 -- paths/export.sh@5 -- # export PATH 00:27:46.178 01:07:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.178 01:07:29 -- nvmf/common.sh@46 -- # : 0 00:27:46.178 01:07:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:46.178 01:07:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:46.178 01:07:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:46.178 01:07:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.179 01:07:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.179 01:07:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:46.179 01:07:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:46.179 01:07:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:46.179 01:07:29 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:46.179 01:07:29 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:46.179 01:07:29 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:46.179 01:07:29 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:46.179 01:07:29 -- host/failover.sh@18 -- # nvmftestinit 00:27:46.179 01:07:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:46.179 01:07:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.179 01:07:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:46.179 01:07:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:46.179 01:07:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:46.179 01:07:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.179 01:07:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.179 01:07:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.179 01:07:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:46.179 01:07:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:46.179 01:07:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:46.179 01:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:48.082 01:07:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:48.082 01:07:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:48.082 01:07:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:48.082 01:07:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:48.082 01:07:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:48.082 01:07:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:48.082 01:07:31 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:27:48.082 01:07:31 -- nvmf/common.sh@294 -- # net_devs=() 00:27:48.082 01:07:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:48.082 01:07:31 -- nvmf/common.sh@295 -- # e810=() 00:27:48.082 01:07:31 -- nvmf/common.sh@295 -- # local -ga e810 00:27:48.082 01:07:31 -- nvmf/common.sh@296 -- # x722=() 00:27:48.082 01:07:31 -- nvmf/common.sh@296 -- # local -ga x722 00:27:48.082 01:07:31 -- nvmf/common.sh@297 -- # mlx=() 00:27:48.082 01:07:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:48.082 01:07:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.082 01:07:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:48.082 01:07:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:48.082 01:07:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:48.082 01:07:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.082 01:07:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:48.082 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:48.082 01:07:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.082 01:07:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:48.082 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:48.082 01:07:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:48.082 01:07:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.082 01:07:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.082 01:07:31 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:27:48.082 01:07:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.082 01:07:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:48.082 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:48.082 01:07:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.082 01:07:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.082 01:07:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.082 01:07:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.082 01:07:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.082 01:07:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:48.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:48.082 01:07:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.082 01:07:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:48.082 01:07:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:48.082 01:07:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:48.082 01:07:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:48.082 01:07:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.082 01:07:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.082 01:07:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.083 01:07:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:48.083 01:07:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.083 01:07:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.083 01:07:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:48.083 01:07:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.083 01:07:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.083 01:07:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:48.083 01:07:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:48.083 01:07:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.083 01:07:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.083 01:07:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.083 01:07:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.083 01:07:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:48.083 01:07:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.083 01:07:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.083 01:07:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.083 01:07:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:48.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:27:48.083 00:27:48.083 --- 10.0.0.2 ping statistics --- 00:27:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.083 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:48.083 01:07:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:27:48.083 00:27:48.083 --- 10.0.0.1 ping statistics --- 00:27:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.083 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:48.083 01:07:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.083 01:07:31 -- nvmf/common.sh@410 -- # return 0 00:27:48.083 01:07:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:48.083 01:07:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.083 01:07:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:48.083 01:07:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:48.083 01:07:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.083 01:07:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:48.083 01:07:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:48.083 01:07:31 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:48.083 01:07:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:48.083 01:07:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:48.083 01:07:31 -- common/autotest_common.sh@10 -- # set +x 00:27:48.083 01:07:31 -- nvmf/common.sh@469 -- # nvmfpid=3505601 00:27:48.083 01:07:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:48.083 01:07:31 -- nvmf/common.sh@470 -- # waitforlisten 3505601 00:27:48.083 01:07:31 -- common/autotest_common.sh@819 -- # '[' -z 3505601 ']' 00:27:48.083 01:07:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.083 01:07:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.083 01:07:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.083 01:07:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.083 01:07:31 -- common/autotest_common.sh@10 -- # set +x 00:27:48.083 [2024-07-23 01:07:31.954293] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:48.083 [2024-07-23 01:07:31.954369] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.083 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.083 [2024-07-23 01:07:32.024294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.083 [2024-07-23 01:07:32.114143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:48.083 [2024-07-23 01:07:32.114327] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.083 [2024-07-23 01:07:32.114347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.083 [2024-07-23 01:07:32.114362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
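The nvmf_tcp_init steps traced above (nvmf/common.sh@228-267) wire the two detected E810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is ping-checked in both directions. Condensed from the trace as a rough sketch to be run as root; the interface names are whatever gather_supported_nvmf_pci_devs found on this host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator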
00:27:48.083 [2024-07-23 01:07:32.114467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.083 [2024-07-23 01:07:32.114568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.083 [2024-07-23 01:07:32.114570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.020 01:07:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.020 01:07:32 -- common/autotest_common.sh@852 -- # return 0 00:27:49.020 01:07:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:49.020 01:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:49.020 01:07:32 -- common/autotest_common.sh@10 -- # set +x 00:27:49.020 01:07:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.020 01:07:32 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:49.020 [2024-07-23 01:07:33.189334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.020 01:07:33 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:49.279 Malloc0 00:27:49.279 01:07:33 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:49.536 01:07:33 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:49.793 01:07:33 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.051 [2024-07-23 01:07:34.159761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.051 01:07:34 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:50.308 [2024-07-23 01:07:34.408533] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:50.308 01:07:34 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:50.566 [2024-07-23 01:07:34.649373] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:50.566 01:07:34 -- host/failover.sh@31 -- # bdevperf_pid=3506029 00:27:50.566 01:07:34 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:50.566 01:07:34 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.566 01:07:34 -- host/failover.sh@34 -- # waitforlisten 3506029 /var/tmp/bdevperf.sock 00:27:50.566 01:07:34 -- common/autotest_common.sh@819 -- # '[' -z 3506029 ']' 00:27:50.566 01:07:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:50.566 01:07:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:50.566 01:07:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:50.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:50.566 01:07:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:50.566 01:07:34 -- common/autotest_common.sh@10 -- # set +x 00:27:51.502 01:07:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:51.502 01:07:35 -- common/autotest_common.sh@852 -- # return 0 00:27:51.502 01:07:35 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:52.071 NVMe0n1 00:27:52.071 01:07:35 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:52.329 00:27:52.329 01:07:36 -- host/failover.sh@39 -- # run_test_pid=3506174 00:27:52.329 01:07:36 -- host/failover.sh@41 -- # sleep 1 00:27:52.329 01:07:36 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:53.262 01:07:37 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.522 [2024-07-23 01:07:37.641517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the 
state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.641999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 [2024-07-23 01:07:37.642145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12718d0 is same with the state(5) to be set 00:27:53.522 01:07:37 -- host/failover.sh@45 -- # sleep 3 00:27:56.812 01:07:40 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:57.096 00:27:57.096 01:07:41 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:57.363 [2024-07-23 01:07:41.347376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 
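Stripped of the qpair state-change noise that accompanies each listener removal, the failover exercise traced so far reduces to the sequence below. This is a condensed sketch, not a verbatim replay: rpc.py stands for scripts/rpc.py, bdevperf for build/examples/bdevperf, and the long workspace paths are shortened.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                          # one listener per failover hop
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # second path
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s of verify I/O
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3                                                 # I/O keeps running on the 4421 path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421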
00:27:57.363 [2024-07-23 01:07:41.347608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.347995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.348006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.348018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.348030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.348041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.348053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 [2024-07-23 01:07:41.348065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272e20 is same with the state(5) to be set 00:27:57.363 01:07:41 -- host/failover.sh@50 -- # sleep 3 00:28:00.651 01:07:44 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.651 [2024-07-23 01:07:44.608342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.651 01:07:44 -- host/failover.sh@55 -- # sleep 1 00:28:01.588 01:07:45 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:01.846 [2024-07-23 01:07:45.885917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273f00 is same with the state(5) to be set 00:28:01.846 [2024-07-23 01:07:45.885953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273f00 is same with the state(5) to be set 00:28:01.846 [2024-07-23 01:07:45.885968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273f00 is same with the state(5) to be set 00:28:01.846 [2024-07-23 01:07:45.885980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273f00 is same with the state(5) to be set 00:28:01.846 [2024-07-23 01:07:45.885992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273f00 is same with the state(5) to be set 00:28:01.846 [2024-07-23 01:07:45.886004] 
00:28:01.847 01:07:45 -- host/failover.sh@59 -- # wait 3506174 00:28:08.421 0 00:28:08.421 01:07:51 -- host/failover.sh@61 -- # killprocess 3506029 00:28:08.421 01:07:51 -- common/autotest_common.sh@926 -- # '[' -z 3506029 ']' 00:28:08.421 01:07:51 -- common/autotest_common.sh@930 -- # kill -0 3506029 00:28:08.421 01:07:51 -- common/autotest_common.sh@931 -- # uname 00:28:08.421 01:07:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:08.421 01:07:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3506029 00:28:08.421 01:07:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:08.421 01:07:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:08.421 01:07:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3506029' killing process with pid 3506029 00:28:08.421 01:07:51 -- common/autotest_common.sh@945 -- # kill 3506029 00:28:08.421 01:07:51 -- common/autotest_common.sh@950 -- # wait 3506029 00:28:08.421 01:07:51 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:08.421 [2024-07-23 01:07:34.709666] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization...
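The killprocess trace above comes from common/autotest_common.sh and follows a straightforward kill-and-reap pattern. The function below is a condensed sketch of that pattern, not the verbatim helper: it rejects an empty pid, checks that the process still exists and is not a sudo wrapper, then kills it and waits so bdevperf's exit status is collected before try.txt is dumped.

# Condensed sketch of the kill-and-reap pattern seen in the trace (not the verbatim helper).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                            # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1               # make sure the process still exists
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK app
    fi
    [ "$process_name" = sudo ] && return 1               # never kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                      # reap it (works when it is a child of this shell)
}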
00:28:08.421 [2024-07-23 01:07:34.709757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506029 ] 00:28:08.421 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.421 [2024-07-23 01:07:34.772155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.421 [2024-07-23 01:07:34.859245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.421 Running I/O for 15 seconds... 00:28:08.421 [2024-07-23 01:07:37.642466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83488 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.421 [2024-07-23 01:07:37.642869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.421 [2024-07-23 01:07:37.642898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.421 [2024-07-23 01:07:37.642920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.421 [2024-07-23 01:07:37.642933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.642948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.642983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.642998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 
[2024-07-23 01:07:37.643092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.643867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.643979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.643997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.644013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.644027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.644042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.422 [2024-07-23 01:07:37.644056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.644071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.644084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.644099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.422 [2024-07-23 01:07:37.644112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.422 [2024-07-23 01:07:37.644127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 
[2024-07-23 01:07:37.644301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.644949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.644977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.644992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.645005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.645033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.645061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.645089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.645120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.645148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.645180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84432 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.423 [2024-07-23 01:07:37.645208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.423 [2024-07-23 01:07:37.645236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.423 [2024-07-23 01:07:37.645250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.424 [2024-07-23 01:07:37.645511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.645897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.645973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.645986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.646043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.646071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.424 [2024-07-23 01:07:37.646099] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.424 [2024-07-23 01:07:37.646305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.424 [2024-07-23 01:07:37.646319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee1710 is same with the state(5) to be set 00:28:08.424 [2024-07-23 01:07:37.646335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.424 [2024-07-23 01:07:37.646346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.424 [2024-07-23 01:07:37.646358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83864 len:8 PRP1 0x0 PRP2 0x0 00:28:08.424 [2024-07-23 01:07:37.646371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.425 [2024-07-23 01:07:37.646429] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee1710 was disconnected and freed. reset controller. 
00:28:08.425 [2024-07-23 01:07:37.646454] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:08.425 [2024-07-23 01:07:37.646489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.425 [2024-07-23 01:07:37.646507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.425 [2024-07-23 01:07:37.646521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.425 [2024-07-23 01:07:37.646534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.425 [2024-07-23 01:07:37.646547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.425 [2024-07-23 01:07:37.646559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.425 [2024-07-23 01:07:37.646572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.425 [2024-07-23 01:07:37.646584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.425 [2024-07-23 01:07:37.646597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.425 [2024-07-23 01:07:37.646642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec20a0 (9): Bad file descriptor 00:28:08.425 [2024-07-23 01:07:37.648927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.425 [2024-07-23 01:07:37.680532] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
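The bdev_nvme_failover_trid notice above is the host-side bdev_nvme module switching the controller from 10.0.0.2:4420 to the alternate transport ID 10.0.0.2:4421 after the active qpair was disconnected and its queued I/O aborted. As a rough sketch only (the controller name NVMe0, the /var/tmp/bdevperf.sock RPC socket path, and the -f ipv4 / -x failover flags are illustrative assumptions, not shown in this log), one way such an alternate path gets set up is to listen on both ports on the target and attach the same controller name once per path on the host:

# target side: expose the subsystem on the alternate port as well (same RPC family as traced above)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# host side, against bdevperf's RPC socket (assumed path): register both paths under one controller name
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover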
00:28:08.425 [2024-07-23 01:07:41.348230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.425 [2024-07-23 01:07:41.348275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.425 [2024-07-23 01:07:41.348294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.425 [2024-07-23 01:07:41.348313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.425 [2024-07-23 01:07:41.348328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.425 [2024-07-23 01:07:41.348341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.425 [2024-07-23 01:07:41.348355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.425 [2024-07-23 01:07:41.348368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.425 [2024-07-23 01:07:41.348380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec20a0 is same with the state(5) to be set
00:28:08.428 [2024-07-23 01:07:41.352175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ece510 is same with the state(5) to be set
00:28:08.428 [2024-07-23 01:07:41.352191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:08.428 [2024-07-23 01:07:41.352202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:08.428 [2024-07-23 01:07:41.352213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0
00:28:08.428 [2024-07-23 01:07:41.352225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.428 [2024-07-23 01:07:41.352285] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ece510 was disconnected and freed. reset controller.
00:28:08.428 [2024-07-23 01:07:41.352302] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:28:08.428 [2024-07-23 01:07:41.352317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:08.428 [2024-07-23 01:07:41.354313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:08.428 [2024-07-23 01:07:41.354353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec20a0 (9): Bad file descriptor
00:28:08.428 [2024-07-23 01:07:41.424657] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:08.428 [2024-07-23 01:07:45.884742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.428 [2024-07-23 01:07:45.884811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.428 [2024-07-23 01:07:45.884832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.428 [2024-07-23 01:07:45.884846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.428 [2024-07-23 01:07:45.884860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.428 [2024-07-23 01:07:45.884873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.428 [2024-07-23 01:07:45.884887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:08.428 [2024-07-23 01:07:45.884899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.428 [2024-07-23 01:07:45.884921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec20a0 is same with the state(5) to be set
00:28:08.430 [2024-07-23 01:07:45.888715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.430 [2024-07-23 01:07:45.888728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.430 [2024-07-23 01:07:45.888759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.430 [2024-07-23 01:07:45.888873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.888985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.888998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.889012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.889025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:08.430 [2024-07-23 01:07:45.889039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.889051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.889066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.889079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.889093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.889106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.889121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.430 [2024-07-23 01:07:45.889136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.430 [2024-07-23 01:07:45.889151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.430 [2024-07-23 01:07:45.889165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889607] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.889802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48672 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.889973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.889988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.890001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.890028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.890056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.890083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.890111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.890138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.890165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.431 [2024-07-23 01:07:45.890199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.431 [2024-07-23 01:07:45.890230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.431 [2024-07-23 01:07:45.890245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.431 [2024-07-23 01:07:45.890258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.432 [2024-07-23 01:07:45.890314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.432 [2024-07-23 01:07:45.890342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.432 [2024-07-23 01:07:45.890397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.432 [2024-07-23 01:07:45.890595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3820 is same with the state(5) to be set 00:28:08.432 [2024-07-23 01:07:45.890631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.432 [2024-07-23 01:07:45.890649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.432 [2024-07-23 01:07:45.890661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48976 len:8 PRP1 0x0 PRP2 0x0 00:28:08.432 [2024-07-23 01:07:45.890674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.432 [2024-07-23 01:07:45.890732] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee3820 was disconnected and freed. reset controller. 00:28:08.432 [2024-07-23 01:07:45.890750] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:08.432 [2024-07-23 01:07:45.890766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.432 [2024-07-23 01:07:45.893004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.432 [2024-07-23 01:07:45.893042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec20a0 (9): Bad file descriptor 00:28:08.432 [2024-07-23 01:07:45.961373] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
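Each completion in the run above carries status (00/08), i.e. Generic Command Status / Command Aborted due to SQ Deletion: bdev_nvme is tearing down the TCP qpair in order to fail over from 10.0.0.2:4422 to 10.0.0.2:4420, so every I/O still outstanding on qid:1 is completed back with that status before the controller is reset against the new address. When reading a capture like this one, a rough tally of how many I/Os were caught in flight can be taken with a one-liner along these lines (the file name is only a placeholder for wherever the bdevperf output was saved; it is not a path used by the test itself):

  grep -c 'ABORTED - SQ DELETION' bdevperf-output.log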
00:28:08.432 
00:28:08.432                                                                    Latency(us)
00:28:08.432 Device Information           : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:28:08.432 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:08.432      Verification LBA range: start 0x0 length 0x4000
00:28:08.432      NVMe0n1                 :      15.01   12528.56      48.94     660.24      0.00     9688.61     831.34   17864.63
00:28:08.432 ===================================================================================================================
00:28:08.432 Total                        :               12528.56      48.94     660.24      0.00     9688.61     831.34   17864.63
00:28:08.432 Received shutdown signal, test time was about 15.000000 seconds
00:28:08.432 
00:28:08.432                                                                    Latency(us)
00:28:08.432 Device Information           : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:28:08.432 ===================================================================================================================
00:28:08.432 Total                        :                   0.00       0.00       0.00      0.00        0.00       0.00       0.00
00:28:08.432 01:07:51 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:08.432 01:07:51 -- host/failover.sh@65 -- # count=3
00:28:08.432 01:07:51 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:08.432 01:07:51 -- host/failover.sh@73 -- # bdevperf_pid=3508074
00:28:08.432 01:07:51 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:08.432 01:07:51 -- host/failover.sh@75 -- # waitforlisten 3508074 /var/tmp/bdevperf.sock
00:28:08.432 01:07:51 -- common/autotest_common.sh@819 -- # '[' -z 3508074 ']'
00:28:08.432 01:07:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:08.432 01:07:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:08.432 01:07:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:08.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
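The pass/fail decision for the first half of the test is simply the grep -c shown above: the script expects exactly three 'Resetting controller successful' notices in the captured bdevperf output, one per failover it provoked. A minimal sketch of that check, assuming the output has been captured to a file (the file name below is a stand-in; in the script the grep runs over the bdevperf capture):

  count=$(grep -c 'Resetting controller successful' bdevperf-output.log)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi

The bdevperf instance launched right after the check (with -z -r /var/tmp/bdevperf.sock) does not start I/O on its own; it waits on that RPC socket until bdevperf.py perform_tests is invoked further down in the trace.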
00:28:08.432 01:07:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:08.432 01:07:51 -- common/autotest_common.sh@10 -- # set +x 00:28:08.690 01:07:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:08.690 01:07:52 -- common/autotest_common.sh@852 -- # return 0 00:28:08.690 01:07:52 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.947 [2024-07-23 01:07:53.105179] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:08.947 01:07:53 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:09.206 [2024-07-23 01:07:53.341892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:09.206 01:07:53 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.773 NVMe0n1 00:28:09.773 01:07:53 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.031 00:28:10.031 01:07:54 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.600 00:28:10.600 01:07:54 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.600 01:07:54 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:10.600 01:07:54 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.857 01:07:55 -- host/failover.sh@87 -- # sleep 3 00:28:14.144 01:07:58 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:14.144 01:07:58 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:14.144 01:07:58 -- host/failover.sh@90 -- # run_test_pid=3508890 00:28:14.144 01:07:58 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:14.144 01:07:58 -- host/failover.sh@92 -- # wait 3508890 00:28:15.518 0 00:28:15.518 01:07:59 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:15.518 [2024-07-23 01:07:51.859908] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:28:15.518 [2024-07-23 01:07:51.859997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508074 ] 00:28:15.518 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.518 [2024-07-23 01:07:51.919481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.518 [2024-07-23 01:07:52.001355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.518 [2024-07-23 01:07:54.987153] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:15.518 [2024-07-23 01:07:54.987234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.518 [2024-07-23 01:07:54.987256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.518 [2024-07-23 01:07:54.987287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.518 [2024-07-23 01:07:54.987301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.518 [2024-07-23 01:07:54.987316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.518 [2024-07-23 01:07:54.987329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.518 [2024-07-23 01:07:54.987343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.518 [2024-07-23 01:07:54.987357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.518 [2024-07-23 01:07:54.987370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:15.518 [2024-07-23 01:07:54.987405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:15.518 [2024-07-23 01:07:54.987436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbc0a0 (9): Bad file descriptor 00:28:15.518 [2024-07-23 01:07:54.994476] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:15.518 Running I/O for 1 seconds... 
00:28:15.518 
00:28:15.518                                                                    Latency(us)
00:28:15.518 Device Information           : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:28:15.518 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:15.518      Verification LBA range: start 0x0 length 0x4000
00:28:15.518      NVMe0n1                 :       1.00   13097.95      51.16       0.00      0.00     9735.99     928.43   11116.85
00:28:15.518 ===================================================================================================================
00:28:15.518 Total                        :               13097.95      51.16       0.00      0.00     9735.99     928.43   11116.85
00:28:15.518 01:07:59 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:15.518 01:07:59 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:15.776 01:07:59 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:15.776 01:07:59 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:15.776 01:07:59 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:16.338 01:08:00 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:16.338 01:08:00 -- host/failover.sh@101 -- # sleep 3
00:28:19.620 01:08:03 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:19.620 01:08:03 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:19.620 01:08:03 -- host/failover.sh@108 -- # killprocess 3508074
00:28:19.620 01:08:03 -- common/autotest_common.sh@926 -- # '[' -z 3508074 ']'
00:28:19.620 01:08:03 -- common/autotest_common.sh@930 -- # kill -0 3508074
00:28:19.620 01:08:03 -- common/autotest_common.sh@931 -- # uname
00:28:19.620 01:08:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:19.620 01:08:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3508074
00:28:19.620 01:08:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:19.620 01:08:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:19.620 01:08:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3508074'
00:28:19.620 killing process with pid 3508074
00:28:19.620 01:08:03 -- common/autotest_common.sh@945 -- # kill 3508074
00:28:19.620 01:08:03 -- common/autotest_common.sh@950 -- # wait 3508074
00:28:19.877 01:08:03 -- host/failover.sh@110 -- # sync
00:28:19.877 01:08:03 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
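Condensed from the xtrace lines above, the RPC-driven part of this second failover pass looks roughly like the sketch below. This is a reconstruction for readability, not the literal host/failover.sh code; $rpc stands for scripts/rpc.py and /var/tmp/bdevperf.sock is the bdevperf RPC socket, exactly as in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # target side: expose two extra portals for the subsystem
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # bdevperf side: attach the same controller name on all three portals, giving NVMe0n1 alternate paths
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # drop the active path; bdev_nvme then logs "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # run the queued verify workload through the surviving paths, then tear everything down
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The final detach calls, the killprocess of the bdevperf pid and the subsystem deletion above correspond to the teardown entries traced just before this point.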
modprobe -v -r nvme-tcp 00:28:20.135 rmmod nvme_tcp 00:28:20.135 rmmod nvme_fabrics 00:28:20.135 rmmod nvme_keyring 00:28:20.135 01:08:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:20.135 01:08:04 -- nvmf/common.sh@123 -- # set -e 00:28:20.135 01:08:04 -- nvmf/common.sh@124 -- # return 0 00:28:20.135 01:08:04 -- nvmf/common.sh@477 -- # '[' -n 3505601 ']' 00:28:20.135 01:08:04 -- nvmf/common.sh@478 -- # killprocess 3505601 00:28:20.135 01:08:04 -- common/autotest_common.sh@926 -- # '[' -z 3505601 ']' 00:28:20.135 01:08:04 -- common/autotest_common.sh@930 -- # kill -0 3505601 00:28:20.135 01:08:04 -- common/autotest_common.sh@931 -- # uname 00:28:20.135 01:08:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:20.135 01:08:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3505601 00:28:20.135 01:08:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:20.135 01:08:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:20.135 01:08:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3505601' 00:28:20.135 killing process with pid 3505601 00:28:20.135 01:08:04 -- common/autotest_common.sh@945 -- # kill 3505601 00:28:20.135 01:08:04 -- common/autotest_common.sh@950 -- # wait 3505601 00:28:20.393 01:08:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:20.393 01:08:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:20.393 01:08:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:20.393 01:08:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.393 01:08:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:20.393 01:08:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.393 01:08:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.393 01:08:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.936 01:08:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:22.936 00:28:22.936 real 0m36.746s 00:28:22.936 user 2m10.192s 00:28:22.936 sys 0m6.085s 00:28:22.936 01:08:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.936 01:08:06 -- common/autotest_common.sh@10 -- # set +x 00:28:22.936 ************************************ 00:28:22.936 END TEST nvmf_failover 00:28:22.936 ************************************ 00:28:22.936 01:08:06 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:22.936 01:08:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:22.936 01:08:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:22.936 01:08:06 -- common/autotest_common.sh@10 -- # set +x 00:28:22.936 ************************************ 00:28:22.936 START TEST nvmf_discovery 00:28:22.936 ************************************ 00:28:22.936 01:08:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:22.936 * Looking for test storage... 
00:28:22.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.936 01:08:06 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.936 01:08:06 -- nvmf/common.sh@7 -- # uname -s 00:28:22.936 01:08:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.936 01:08:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.936 01:08:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.936 01:08:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.936 01:08:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.936 01:08:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.936 01:08:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.936 01:08:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.936 01:08:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.936 01:08:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.936 01:08:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.936 01:08:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.936 01:08:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.936 01:08:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.936 01:08:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.936 01:08:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.936 01:08:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.936 01:08:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.936 01:08:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.936 01:08:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.936 01:08:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.936 01:08:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.936 01:08:06 -- paths/export.sh@5 -- # export PATH 00:28:22.936 01:08:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.936 01:08:06 -- nvmf/common.sh@46 -- # : 0 00:28:22.937 01:08:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:22.937 01:08:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:22.937 01:08:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:22.937 01:08:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.937 01:08:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.937 01:08:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:22.937 01:08:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:22.937 01:08:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:22.937 01:08:06 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:22.937 01:08:06 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:22.937 01:08:06 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:22.937 01:08:06 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:22.937 01:08:06 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:22.937 01:08:06 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:22.937 01:08:06 -- host/discovery.sh@25 -- # nvmftestinit 00:28:22.937 01:08:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:22.937 01:08:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.937 01:08:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:22.937 01:08:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:22.937 01:08:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:22.937 01:08:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.937 01:08:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.937 01:08:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.937 01:08:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:22.937 01:08:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:22.937 01:08:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:22.937 01:08:06 -- common/autotest_common.sh@10 -- # set +x 00:28:24.840 01:08:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:24.840 01:08:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:24.840 01:08:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:24.840 01:08:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:24.840 01:08:08 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:24.840 01:08:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:24.840 01:08:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:24.840 01:08:08 -- nvmf/common.sh@294 -- # net_devs=() 00:28:24.840 01:08:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:24.840 01:08:08 -- nvmf/common.sh@295 -- # e810=() 00:28:24.840 01:08:08 -- nvmf/common.sh@295 -- # local -ga e810 00:28:24.840 01:08:08 -- nvmf/common.sh@296 -- # x722=() 00:28:24.840 01:08:08 -- nvmf/common.sh@296 -- # local -ga x722 00:28:24.840 01:08:08 -- nvmf/common.sh@297 -- # mlx=() 00:28:24.840 01:08:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:24.840 01:08:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.840 01:08:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:24.840 01:08:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:24.840 01:08:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:24.840 01:08:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:24.840 01:08:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.840 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.840 01:08:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:24.840 01:08:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:24.840 01:08:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.840 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.840 01:08:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:24.841 01:08:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:24.841 
01:08:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.841 01:08:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:24.841 01:08:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.841 01:08:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.841 01:08:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.841 01:08:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:24.841 01:08:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.841 01:08:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:24.841 01:08:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.841 01:08:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.841 01:08:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.841 01:08:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:24.841 01:08:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:24.841 01:08:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:24.841 01:08:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.841 01:08:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.841 01:08:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.841 01:08:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:24.841 01:08:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.841 01:08:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.841 01:08:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:24.841 01:08:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.841 01:08:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.841 01:08:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:24.841 01:08:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:24.841 01:08:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.841 01:08:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.841 01:08:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.841 01:08:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.841 01:08:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:24.841 01:08:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.841 01:08:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.841 01:08:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.841 01:08:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:24.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:28:24.841 00:28:24.841 --- 10.0.0.2 ping statistics --- 00:28:24.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.841 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:24.841 01:08:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:28:24.841 00:28:24.841 --- 10.0.0.1 ping statistics --- 00:28:24.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.841 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:24.841 01:08:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.841 01:08:08 -- nvmf/common.sh@410 -- # return 0 00:28:24.841 01:08:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:24.841 01:08:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.841 01:08:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:24.841 01:08:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.841 01:08:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:24.841 01:08:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:24.841 01:08:08 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:24.841 01:08:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:24.841 01:08:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:24.841 01:08:08 -- common/autotest_common.sh@10 -- # set +x 00:28:24.841 01:08:08 -- nvmf/common.sh@469 -- # nvmfpid=3511523 00:28:24.841 01:08:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:24.841 01:08:08 -- nvmf/common.sh@470 -- # waitforlisten 3511523 00:28:24.841 01:08:08 -- common/autotest_common.sh@819 -- # '[' -z 3511523 ']' 00:28:24.841 01:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.841 01:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:24.841 01:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.841 01:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:24.841 01:08:08 -- common/autotest_common.sh@10 -- # set +x 00:28:24.841 [2024-07-23 01:08:08.874947] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:24.841 [2024-07-23 01:08:08.875027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.841 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.841 [2024-07-23 01:08:08.943352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.841 [2024-07-23 01:08:09.033015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:24.841 [2024-07-23 01:08:09.033180] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.841 [2024-07-23 01:08:09.033200] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.841 [2024-07-23 01:08:09.033214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
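Note: the nvmf_tcp_init sequence traced above splits the two E810 ports across a network namespace so target and initiator traffic actually crosses the wire: cvl_0_0 is moved into cvl_0_0_ns_spdk and carries the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1/24, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace. Condensed, with the interface and namespace names from this run, the setup amounts to roughly:

    # Condensed recap of the nvmf_tcp_init steps traced above (names from this run).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator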
00:28:24.841 [2024-07-23 01:08:09.033252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.776 01:08:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:25.776 01:08:09 -- common/autotest_common.sh@852 -- # return 0 00:28:25.776 01:08:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:25.776 01:08:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 01:08:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.776 01:08:09 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.776 01:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 [2024-07-23 01:08:09.815862] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.776 01:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.776 01:08:09 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:25.776 01:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 [2024-07-23 01:08:09.824040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:25.776 01:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.776 01:08:09 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:25.776 01:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 null0 00:28:25.776 01:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.776 01:08:09 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:25.776 01:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 null1 00:28:25.776 01:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.776 01:08:09 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:25.776 01:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 01:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.776 01:08:09 -- host/discovery.sh@45 -- # hostpid=3511682 00:28:25.776 01:08:09 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:25.776 01:08:09 -- host/discovery.sh@46 -- # waitforlisten 3511682 /tmp/host.sock 00:28:25.776 01:08:09 -- common/autotest_common.sh@819 -- # '[' -z 3511682 ']' 00:28:25.776 01:08:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:25.776 01:08:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:25.776 01:08:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:25.776 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:25.776 01:08:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:25.776 01:08:09 -- common/autotest_common.sh@10 -- # set +x 00:28:25.776 [2024-07-23 01:08:09.891853] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
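Note: at this point the target-side app (pid 3511523, default RPC socket /var/tmp/spdk.sock, running inside cvl_0_0_ns_spdk) has a TCP transport, a discovery listener on 10.0.0.2:8009 and two null bdevs (null0, null1), and the test is starting a second nvmf_tgt as the "host" side with its own RPC socket (-r /tmp/host.sock); that app's startup banner continues just below. The target-side bring-up traced above corresponds roughly to these calls (rpc_cmd in the trace is the test harness's wrapper for the same RPCs, arguments reproduced as traced):

    # Approximate rpc.py form of the target-side setup traced above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512     # null bdevs backing the namespaces added later
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine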
00:28:25.776 [2024-07-23 01:08:09.891930] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511682 ] 00:28:25.776 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.776 [2024-07-23 01:08:09.951927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.035 [2024-07-23 01:08:10.048991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:26.035 [2024-07-23 01:08:10.049162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.969 01:08:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.969 01:08:10 -- common/autotest_common.sh@852 -- # return 0 00:28:26.969 01:08:10 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:26.969 01:08:10 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:26.969 01:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:10 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:26.969 01:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:10 -- host/discovery.sh@72 -- # notify_id=0 00:28:26.969 01:08:10 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.969 01:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # sort 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # xargs 00:28:26.969 01:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:10 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:26.969 01:08:10 -- host/discovery.sh@79 -- # get_bdev_list 00:28:26.969 01:08:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.969 01:08:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.969 01:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:10 -- host/discovery.sh@55 -- # sort 00:28:26.969 01:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:10 -- host/discovery.sh@55 -- # xargs 00:28:26.969 01:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:10 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:26.969 01:08:10 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:26.969 01:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:10 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.969 01:08:10 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.969 01:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # sort 00:28:26.969 01:08:10 -- host/discovery.sh@59 -- # xargs 00:28:26.969 01:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:11 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:26.969 01:08:11 -- host/discovery.sh@83 -- # get_bdev_list 00:28:26.969 01:08:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.969 01:08:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.969 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:11 -- host/discovery.sh@55 -- # sort 00:28:26.969 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:11 -- host/discovery.sh@55 -- # xargs 00:28:26.969 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:11 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:26.969 01:08:11 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:26.969 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:11 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:26.969 01:08:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.969 01:08:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.969 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.969 01:08:11 -- host/discovery.sh@59 -- # sort 00:28:26.969 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.969 01:08:11 -- host/discovery.sh@59 -- # xargs 00:28:26.969 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.969 01:08:11 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:26.969 01:08:11 -- host/discovery.sh@87 -- # get_bdev_list 00:28:26.969 01:08:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.969 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.970 01:08:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.970 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.970 01:08:11 -- host/discovery.sh@55 -- # sort 00:28:26.970 01:08:11 -- host/discovery.sh@55 -- # xargs 00:28:26.970 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.970 01:08:11 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:26.970 01:08:11 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.970 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.970 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.970 [2024-07-23 01:08:11.143748] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.970 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.970 01:08:11 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:26.970 01:08:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.970 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.970 01:08:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.970 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.970 01:08:11 -- host/discovery.sh@59 -- # sort 00:28:26.970 01:08:11 -- 
host/discovery.sh@59 -- # xargs 00:28:26.970 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.227 01:08:11 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:27.227 01:08:11 -- host/discovery.sh@93 -- # get_bdev_list 00:28:27.227 01:08:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.227 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.227 01:08:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:27.227 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:27.227 01:08:11 -- host/discovery.sh@55 -- # sort 00:28:27.227 01:08:11 -- host/discovery.sh@55 -- # xargs 00:28:27.227 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.227 01:08:11 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:27.227 01:08:11 -- host/discovery.sh@94 -- # get_notification_count 00:28:27.227 01:08:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:27.227 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.227 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:27.227 01:08:11 -- host/discovery.sh@74 -- # jq '. | length' 00:28:27.227 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.227 01:08:11 -- host/discovery.sh@74 -- # notification_count=0 00:28:27.227 01:08:11 -- host/discovery.sh@75 -- # notify_id=0 00:28:27.227 01:08:11 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:27.227 01:08:11 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:27.227 01:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.227 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:28:27.227 01:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.227 01:08:11 -- host/discovery.sh@100 -- # sleep 1 00:28:27.792 [2024-07-23 01:08:11.885454] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:27.792 [2024-07-23 01:08:11.885489] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:27.792 [2024-07-23 01:08:11.885518] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:28.051 [2024-07-23 01:08:12.013998] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:28.051 [2024-07-23 01:08:12.198262] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:28.051 [2024-07-23 01:08:12.198290] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:28.310 01:08:12 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:28.310 01:08:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:28.310 01:08:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:28.310 01:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.310 01:08:12 -- host/discovery.sh@59 -- # sort 00:28:28.310 01:08:12 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 01:08:12 -- host/discovery.sh@59 -- # xargs 00:28:28.310 01:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@102 -- # get_bdev_list 00:28:28.310 01:08:12 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.310 01:08:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:28.310 01:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.310 01:08:12 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 01:08:12 -- host/discovery.sh@55 -- # sort 00:28:28.310 01:08:12 -- host/discovery.sh@55 -- # xargs 00:28:28.310 01:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:28.310 01:08:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:28.310 01:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.310 01:08:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:28.310 01:08:12 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 01:08:12 -- host/discovery.sh@63 -- # sort -n 00:28:28.310 01:08:12 -- host/discovery.sh@63 -- # xargs 00:28:28.310 01:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@104 -- # get_notification_count 00:28:28.310 01:08:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:28.310 01:08:12 -- host/discovery.sh@74 -- # jq '. | length' 00:28:28.310 01:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.310 01:08:12 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 01:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@74 -- # notification_count=1 00:28:28.310 01:08:12 -- host/discovery.sh@75 -- # notify_id=1 00:28:28.310 01:08:12 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:28.310 01:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.310 01:08:12 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 01:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.310 01:08:12 -- host/discovery.sh@109 -- # sleep 1 00:28:29.683 01:08:13 -- host/discovery.sh@110 -- # get_bdev_list 00:28:29.683 01:08:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.683 01:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.683 01:08:13 -- common/autotest_common.sh@10 -- # set +x 00:28:29.683 01:08:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:29.683 01:08:13 -- host/discovery.sh@55 -- # sort 00:28:29.683 01:08:13 -- host/discovery.sh@55 -- # xargs 00:28:29.683 01:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.683 01:08:13 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:29.683 01:08:13 -- host/discovery.sh@111 -- # get_notification_count 00:28:29.683 01:08:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:29.683 01:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.683 01:08:13 -- common/autotest_common.sh@10 -- # set +x 00:28:29.683 01:08:13 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:29.683 01:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.683 01:08:13 -- host/discovery.sh@74 -- # notification_count=1 00:28:29.683 01:08:13 -- host/discovery.sh@75 -- # notify_id=2 00:28:29.684 01:08:13 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:29.684 01:08:13 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:29.684 01:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.684 01:08:13 -- common/autotest_common.sh@10 -- # set +x 00:28:29.684 [2024-07-23 01:08:13.550926] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:29.684 [2024-07-23 01:08:13.552089] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:29.684 [2024-07-23 01:08:13.552125] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:29.684 01:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.684 01:08:13 -- host/discovery.sh@117 -- # sleep 1 00:28:29.684 [2024-07-23 01:08:13.679521] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:29.942 [2024-07-23 01:08:13.943803] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:29.943 [2024-07-23 01:08:13.943826] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:29.943 [2024-07-23 01:08:13.943835] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:30.511 01:08:14 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:30.511 01:08:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:30.511 01:08:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:30.511 01:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.511 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 01:08:14 -- host/discovery.sh@59 -- # sort 00:28:30.511 01:08:14 -- host/discovery.sh@59 -- # xargs 00:28:30.511 01:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.511 01:08:14 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.511 01:08:14 -- host/discovery.sh@119 -- # get_bdev_list 00:28:30.511 01:08:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.511 01:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.511 01:08:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:30.511 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 01:08:14 -- host/discovery.sh@55 -- # sort 00:28:30.511 01:08:14 -- host/discovery.sh@55 -- # xargs 00:28:30.511 01:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.511 01:08:14 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:30.511 01:08:14 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:30.511 01:08:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:30.511 01:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.511 01:08:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:30.511 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 01:08:14 -- host/discovery.sh@63 
-- # sort -n 00:28:30.511 01:08:14 -- host/discovery.sh@63 -- # xargs 00:28:30.511 01:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.511 01:08:14 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:30.511 01:08:14 -- host/discovery.sh@121 -- # get_notification_count 00:28:30.511 01:08:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:30.511 01:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.511 01:08:14 -- host/discovery.sh@74 -- # jq '. | length' 00:28:30.511 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 01:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.769 01:08:14 -- host/discovery.sh@74 -- # notification_count=0 00:28:30.769 01:08:14 -- host/discovery.sh@75 -- # notify_id=2 00:28:30.769 01:08:14 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:30.769 01:08:14 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:30.769 01:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.769 01:08:14 -- common/autotest_common.sh@10 -- # set +x 00:28:30.769 [2024-07-23 01:08:14.722811] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:30.769 [2024-07-23 01:08:14.722840] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:30.769 01:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.769 01:08:14 -- host/discovery.sh@127 -- # sleep 1 00:28:30.769 [2024-07-23 01:08:14.731551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.769 [2024-07-23 01:08:14.731584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.769 [2024-07-23 01:08:14.731604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.769 [2024-07-23 01:08:14.731628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.769 [2024-07-23 01:08:14.731644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.769 [2024-07-23 01:08:14.731659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.769 [2024-07-23 01:08:14.731689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.769 [2024-07-23 01:08:14.731703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.769 [2024-07-23 01:08:14.731716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.769 [2024-07-23 01:08:14.741557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.769 [2024-07-23 01:08:14.751604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.769 [2024-07-23 01:08:14.751820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
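Note: the error burst that follows the nvmf_subsystem_remove_listener call above is the point of this test case, not a failure of the run. Dropping the 10.0.0.2:4420 listener aborts the outstanding admin commands (the ABORTED - SQ DELETION completions), the host-side bdev_nvme path goes into its reset/reconnect loop, and every reconnect to the now-closed port fails with errno 111 until the refreshed discovery log page removes the dead 4420 path. errno 111 is ECONNREFUSED on Linux:

    python3 -c "import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))"   # 111 Connection refused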
00:28:30.769 [2024-07-23 01:08:14.752044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.769 [2024-07-23 01:08:14.752072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60b590 with addr=10.0.0.2, port=4420 00:28:30.769 [2024-07-23 01:08:14.752096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.769 [2024-07-23 01:08:14.752122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.769 [2024-07-23 01:08:14.752172] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.769 [2024-07-23 01:08:14.752193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:30.769 [2024-07-23 01:08:14.752209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.769 [2024-07-23 01:08:14.752232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.769 [2024-07-23 01:08:14.761685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.769 [2024-07-23 01:08:14.761949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.769 [2024-07-23 01:08:14.762152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.769 [2024-07-23 01:08:14.762179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60b590 with addr=10.0.0.2, port=4420 00:28:30.769 [2024-07-23 01:08:14.762195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.769 [2024-07-23 01:08:14.762217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.769 [2024-07-23 01:08:14.762250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.769 [2024-07-23 01:08:14.762267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:30.769 [2024-07-23 01:08:14.762295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.769 [2024-07-23 01:08:14.762314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.769 [2024-07-23 01:08:14.771769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.769 [2024-07-23 01:08:14.772033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.769 [2024-07-23 01:08:14.772232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.769 [2024-07-23 01:08:14.772263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60b590 with addr=10.0.0.2, port=4420 00:28:30.769 [2024-07-23 01:08:14.772281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.769 [2024-07-23 01:08:14.772306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.769 [2024-07-23 01:08:14.772354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.769 [2024-07-23 01:08:14.772375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:30.769 [2024-07-23 01:08:14.772390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.769 [2024-07-23 01:08:14.772411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.769 [2024-07-23 01:08:14.781854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.769 [2024-07-23 01:08:14.782073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.770 [2024-07-23 01:08:14.782265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.770 [2024-07-23 01:08:14.782292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60b590 with addr=10.0.0.2, port=4420 00:28:30.770 [2024-07-23 01:08:14.782310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.770 [2024-07-23 01:08:14.782340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.770 [2024-07-23 01:08:14.782362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.770 [2024-07-23 01:08:14.782377] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:30.770 [2024-07-23 01:08:14.782390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.770 [2024-07-23 01:08:14.782425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.770 [2024-07-23 01:08:14.791953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.770 [2024-07-23 01:08:14.792160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.770 [2024-07-23 01:08:14.792347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.770 [2024-07-23 01:08:14.792377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60b590 with addr=10.0.0.2, port=4420 00:28:30.770 [2024-07-23 01:08:14.792395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.770 [2024-07-23 01:08:14.792419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.770 [2024-07-23 01:08:14.792466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.770 [2024-07-23 01:08:14.792487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:30.770 [2024-07-23 01:08:14.792502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.770 [2024-07-23 01:08:14.792523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.770 [2024-07-23 01:08:14.802031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.770 [2024-07-23 01:08:14.802261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.770 [2024-07-23 01:08:14.802438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.770 [2024-07-23 01:08:14.802466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60b590 with addr=10.0.0.2, port=4420 00:28:30.770 [2024-07-23 01:08:14.802484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b590 is same with the state(5) to be set 00:28:30.770 [2024-07-23 01:08:14.802508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b590 (9): Bad file descriptor 00:28:30.770 [2024-07-23 01:08:14.802543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.770 [2024-07-23 01:08:14.802562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:30.770 [2024-07-23 01:08:14.802577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.770 [2024-07-23 01:08:14.802597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
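Note: once the discovery poller processes the updated log page (immediately below: 4420 "not found", 4421 "found again"), the attached controller should be left with a single path on 4421, which is exactly what the next get_subsystem_paths check asserts. Spelled out by hand against the host-side RPC socket used in this run, that check is simply:

    # Same check the test performs below, against /tmp/host.sock (socket path from this run).
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
    # expected output once the 4420 path is gone: 4421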
00:28:30.770 [2024-07-23 01:08:14.809556] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:30.770 [2024-07-23 01:08:14.809587] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:31.706 01:08:15 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:31.706 01:08:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.706 01:08:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.706 01:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.706 01:08:15 -- host/discovery.sh@59 -- # sort 00:28:31.706 01:08:15 -- common/autotest_common.sh@10 -- # set +x 00:28:31.706 01:08:15 -- host/discovery.sh@59 -- # xargs 00:28:31.706 01:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@129 -- # get_bdev_list 00:28:31.706 01:08:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.706 01:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.706 01:08:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.706 01:08:15 -- common/autotest_common.sh@10 -- # set +x 00:28:31.706 01:08:15 -- host/discovery.sh@55 -- # sort 00:28:31.706 01:08:15 -- host/discovery.sh@55 -- # xargs 00:28:31.706 01:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:31.706 01:08:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:31.706 01:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.706 01:08:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:31.706 01:08:15 -- common/autotest_common.sh@10 -- # set +x 00:28:31.706 01:08:15 -- host/discovery.sh@63 -- # sort -n 00:28:31.706 01:08:15 -- host/discovery.sh@63 -- # xargs 00:28:31.706 01:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@131 -- # get_notification_count 00:28:31.706 01:08:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:31.706 01:08:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:31.706 01:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.706 01:08:15 -- common/autotest_common.sh@10 -- # set +x 00:28:31.706 01:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@74 -- # notification_count=0 00:28:31.706 01:08:15 -- host/discovery.sh@75 -- # notify_id=2 00:28:31.706 01:08:15 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:31.706 01:08:15 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:31.706 01:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.706 01:08:15 -- common/autotest_common.sh@10 -- # set +x 00:28:31.964 01:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.964 01:08:15 -- host/discovery.sh@135 -- # sleep 1 00:28:32.898 01:08:16 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:32.898 01:08:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:32.898 01:08:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:32.898 01:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.898 01:08:16 -- common/autotest_common.sh@10 -- # set +x 00:28:32.898 01:08:16 -- host/discovery.sh@59 -- # sort 00:28:32.898 01:08:16 -- host/discovery.sh@59 -- # xargs 00:28:32.898 01:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.898 01:08:16 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:32.898 01:08:16 -- host/discovery.sh@137 -- # get_bdev_list 00:28:32.898 01:08:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.898 01:08:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.898 01:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.898 01:08:16 -- common/autotest_common.sh@10 -- # set +x 00:28:32.898 01:08:16 -- host/discovery.sh@55 -- # sort 00:28:32.898 01:08:16 -- host/discovery.sh@55 -- # xargs 00:28:32.898 01:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.898 01:08:16 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:32.898 01:08:16 -- host/discovery.sh@138 -- # get_notification_count 00:28:32.898 01:08:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:32.898 01:08:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:32.898 01:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.898 01:08:16 -- common/autotest_common.sh@10 -- # set +x 00:28:32.898 01:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.898 01:08:17 -- host/discovery.sh@74 -- # notification_count=2 00:28:32.898 01:08:17 -- host/discovery.sh@75 -- # notify_id=4 00:28:32.898 01:08:17 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:32.898 01:08:17 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.898 01:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.898 01:08:17 -- common/autotest_common.sh@10 -- # set +x 00:28:34.276 [2024-07-23 01:08:18.082681] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:34.276 [2024-07-23 01:08:18.082716] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:34.276 [2024-07-23 01:08:18.082748] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:34.276 [2024-07-23 01:08:18.210180] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:34.276 [2024-07-23 01:08:18.317473] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:34.276 [2024-07-23 01:08:18.317517] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:34.276 01:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.276 01:08:18 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:34.276 01:08:18 -- common/autotest_common.sh@640 -- # local es=0 00:28:34.276 01:08:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:34.276 01:08:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:34.276 01:08:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.276 01:08:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:34.276 01:08:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.276 01:08:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:34.276 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.276 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:34.276 request: 00:28:34.276 { 00:28:34.276 "name": "nvme", 00:28:34.276 "trtype": "tcp", 00:28:34.276 "traddr": "10.0.0.2", 00:28:34.276 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:34.276 "adrfam": "ipv4", 00:28:34.276 "trsvcid": "8009", 00:28:34.276 "wait_for_attach": true, 00:28:34.276 "method": "bdev_nvme_start_discovery", 00:28:34.276 "req_id": 1 00:28:34.276 } 00:28:34.276 Got JSON-RPC error response 00:28:34.276 response: 00:28:34.276 { 00:28:34.276 "code": -17, 00:28:34.276 "message": "File exists" 00:28:34.276 } 00:28:34.276 01:08:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:34.276 01:08:18 -- common/autotest_common.sh@643 -- # es=1 00:28:34.276 01:08:18 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:34.276 01:08:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:34.276 01:08:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:34.276 01:08:18 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:34.276 01:08:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:34.276 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.276 01:08:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:34.276 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:34.276 01:08:18 -- host/discovery.sh@67 -- # sort 00:28:34.276 01:08:18 -- host/discovery.sh@67 -- # xargs 00:28:34.276 01:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.276 01:08:18 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:34.276 01:08:18 -- host/discovery.sh@147 -- # get_bdev_list 00:28:34.276 01:08:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.276 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.276 01:08:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:34.276 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:34.276 01:08:18 -- host/discovery.sh@55 -- # sort 00:28:34.276 01:08:18 -- host/discovery.sh@55 -- # xargs 00:28:34.276 01:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.276 01:08:18 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:34.276 01:08:18 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:34.276 01:08:18 -- common/autotest_common.sh@640 -- # local es=0 00:28:34.276 01:08:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:34.276 01:08:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:34.276 01:08:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.276 01:08:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:34.276 01:08:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.276 01:08:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:34.276 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.276 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:34.276 request: 00:28:34.276 { 00:28:34.276 "name": "nvme_second", 00:28:34.276 "trtype": "tcp", 00:28:34.276 "traddr": "10.0.0.2", 00:28:34.276 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:34.276 "adrfam": "ipv4", 00:28:34.276 "trsvcid": "8009", 00:28:34.276 "wait_for_attach": true, 00:28:34.276 "method": "bdev_nvme_start_discovery", 00:28:34.276 "req_id": 1 00:28:34.276 } 00:28:34.276 Got JSON-RPC error response 00:28:34.276 response: 00:28:34.276 { 00:28:34.276 "code": -17, 00:28:34.276 "message": "File exists" 00:28:34.276 } 00:28:34.276 01:08:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:34.276 01:08:18 -- common/autotest_common.sh@643 -- # es=1 00:28:34.276 01:08:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:34.277 01:08:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:34.277 01:08:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:34.277 
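Note: both negative checks in this stretch call bdev_nvme_start_discovery while a discovery service is already attached; reusing the name "nvme" and adding "nvme_second" against the same 10.0.0.2:8009 endpoint are each rejected with code -17 (-EEXIST), which the JSON-RPC layer renders as "File exists". Run by hand with the same arguments, the first case looks like:

    # Same call as the trace's first NOT-check; expected to fail while discovery "nvme" is attached.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # expected JSON-RPC error: code -17, message "File exists"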
01:08:18 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:34.277 01:08:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:34.277 01:08:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:34.277 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.277 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:34.277 01:08:18 -- host/discovery.sh@67 -- # sort 00:28:34.277 01:08:18 -- host/discovery.sh@67 -- # xargs 00:28:34.277 01:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.277 01:08:18 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:34.277 01:08:18 -- host/discovery.sh@153 -- # get_bdev_list 00:28:34.277 01:08:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.277 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.277 01:08:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:34.277 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:34.277 01:08:18 -- host/discovery.sh@55 -- # sort 00:28:34.277 01:08:18 -- host/discovery.sh@55 -- # xargs 00:28:34.534 01:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.534 01:08:18 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:34.534 01:08:18 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:34.534 01:08:18 -- common/autotest_common.sh@640 -- # local es=0 00:28:34.534 01:08:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:34.534 01:08:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:34.534 01:08:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.534 01:08:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:34.534 01:08:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.534 01:08:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:34.534 01:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.534 01:08:18 -- common/autotest_common.sh@10 -- # set +x 00:28:35.471 [2024-07-23 01:08:19.513717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.471 [2024-07-23 01:08:19.513885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.471 [2024-07-23 01:08:19.513912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x609a80 with addr=10.0.0.2, port=8010 00:28:35.471 [2024-07-23 01:08:19.513939] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:35.471 [2024-07-23 01:08:19.513952] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:35.471 [2024-07-23 01:08:19.513965] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:36.407 [2024-07-23 01:08:20.516207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.407 [2024-07-23 01:08:20.516401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.407 [2024-07-23 01:08:20.516440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x609a80 with addr=10.0.0.2, port=8010 00:28:36.407 [2024-07-23 01:08:20.516471] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:36.407 [2024-07-23 01:08:20.516485] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:36.407 [2024-07-23 01:08:20.516499] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:37.344 [2024-07-23 01:08:21.518381] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:37.344 request: 00:28:37.344 { 00:28:37.344 "name": "nvme_second", 00:28:37.344 "trtype": "tcp", 00:28:37.344 "traddr": "10.0.0.2", 00:28:37.344 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:37.344 "adrfam": "ipv4", 00:28:37.344 "trsvcid": "8010", 00:28:37.344 "attach_timeout_ms": 3000, 00:28:37.344 "method": "bdev_nvme_start_discovery", 00:28:37.344 "req_id": 1 00:28:37.344 } 00:28:37.344 Got JSON-RPC error response 00:28:37.344 response: 00:28:37.344 { 00:28:37.344 "code": -110, 00:28:37.344 "message": "Connection timed out" 00:28:37.344 } 00:28:37.344 01:08:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:37.344 01:08:21 -- common/autotest_common.sh@643 -- # es=1 00:28:37.344 01:08:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:37.344 01:08:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:37.344 01:08:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:37.344 01:08:21 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:37.344 01:08:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:37.344 01:08:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:37.344 01:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.344 01:08:21 -- host/discovery.sh@67 -- # sort 00:28:37.344 01:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:37.344 01:08:21 -- host/discovery.sh@67 -- # xargs 00:28:37.344 01:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.602 01:08:21 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:37.602 01:08:21 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:37.602 01:08:21 -- host/discovery.sh@162 -- # kill 3511682 00:28:37.602 01:08:21 -- host/discovery.sh@163 -- # nvmftestfini 00:28:37.602 01:08:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:37.602 01:08:21 -- nvmf/common.sh@116 -- # sync 00:28:37.602 01:08:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:37.602 01:08:21 -- nvmf/common.sh@119 -- # set +e 00:28:37.602 01:08:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:37.602 01:08:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:37.602 rmmod nvme_tcp 00:28:37.602 rmmod nvme_fabrics 00:28:37.602 rmmod nvme_keyring 00:28:37.602 01:08:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:37.602 01:08:21 -- nvmf/common.sh@123 -- # set -e 00:28:37.602 01:08:21 -- nvmf/common.sh@124 -- # return 0 00:28:37.602 01:08:21 -- nvmf/common.sh@477 -- # '[' -n 3511523 ']' 00:28:37.602 01:08:21 -- nvmf/common.sh@478 -- # killprocess 3511523 00:28:37.602 01:08:21 -- common/autotest_common.sh@926 -- # '[' -z 3511523 ']' 00:28:37.602 01:08:21 -- common/autotest_common.sh@930 -- # kill -0 3511523 00:28:37.602 01:08:21 -- common/autotest_common.sh@931 -- # uname 00:28:37.602 01:08:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:37.602 01:08:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3511523 00:28:37.602 
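Note: the final negative check points the discovery client at 10.0.0.2:8010, where nothing is listening. The connect() failures above are retried until the 3000 ms budget passed via -T (attach_timeout_ms in the request JSON) runs out, and the RPC then fails with -110 (ETIMEDOUT, "Connection timed out"), after which the test tears everything down. The equivalent manual invocation, with the arguments as traced, is:

    # Same call as the trace's timeout check; nothing listens on 8010, so it should fail with -110 after ~3 s.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000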
01:08:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:37.602 01:08:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:37.602 01:08:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3511523' 00:28:37.602 killing process with pid 3511523 00:28:37.602 01:08:21 -- common/autotest_common.sh@945 -- # kill 3511523 00:28:37.602 01:08:21 -- common/autotest_common.sh@950 -- # wait 3511523 00:28:37.862 01:08:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:37.862 01:08:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:37.862 01:08:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:37.862 01:08:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.862 01:08:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:37.862 01:08:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.862 01:08:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.862 01:08:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.772 01:08:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:39.772 00:28:39.772 real 0m17.312s 00:28:39.772 user 0m26.784s 00:28:39.772 sys 0m2.957s 00:28:39.772 01:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.772 01:08:23 -- common/autotest_common.sh@10 -- # set +x 00:28:39.772 ************************************ 00:28:39.772 END TEST nvmf_discovery 00:28:39.772 ************************************ 00:28:39.772 01:08:23 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:39.772 01:08:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:39.772 01:08:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:39.772 01:08:23 -- common/autotest_common.sh@10 -- # set +x 00:28:39.772 ************************************ 00:28:39.772 START TEST nvmf_discovery_remove_ifc 00:28:39.772 ************************************ 00:28:39.772 01:08:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:40.030 * Looking for test storage... 
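Note: with the discovery checks done, the teardown traced above kills the host-side app (3511682) and the target (3511523), unloads the NVMe/TCP host modules and unwinds the namespace plumbing before the next test (nvmf_discovery_remove_ifc) repeats detection and setup from the top. In rough manual form, with the names from this run:

    # Rough manual equivalent of the nvmftestfini / nvmf_tcp_fini teardown traced above.
    kill 3511682 3511523                      # host-side and target-side nvmf_tgt pids from this run
    modprobe -v -r nvme-tcp nvme-fabrics      # unloads nvme_tcp, nvme_fabrics, nvme_keyring as shown
    ip netns del cvl_0_0_ns_spdk              # assumption: this is what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1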
00:28:40.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.030 01:08:24 -- nvmf/common.sh@7 -- # uname -s 00:28:40.030 01:08:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.030 01:08:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.030 01:08:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.030 01:08:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.030 01:08:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.030 01:08:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.030 01:08:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.030 01:08:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.030 01:08:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.030 01:08:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.030 01:08:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.030 01:08:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:40.030 01:08:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.030 01:08:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.030 01:08:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.030 01:08:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.030 01:08:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.030 01:08:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.030 01:08:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.030 01:08:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.030 01:08:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.030 01:08:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.030 01:08:24 -- paths/export.sh@5 -- # export PATH 00:28:40.030 01:08:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.030 01:08:24 -- nvmf/common.sh@46 -- # : 0 00:28:40.030 01:08:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:40.030 01:08:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:40.030 01:08:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:40.030 01:08:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.030 01:08:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.030 01:08:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:40.030 01:08:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:40.030 01:08:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:40.030 01:08:24 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:40.030 01:08:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:40.030 01:08:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.030 01:08:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:40.030 01:08:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:40.030 01:08:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:40.030 01:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.030 01:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.030 01:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.030 01:08:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:40.030 01:08:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:40.030 01:08:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:40.030 01:08:24 -- common/autotest_common.sh@10 -- # set +x 00:28:41.965 01:08:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:41.965 01:08:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:41.965 01:08:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:41.965 01:08:25 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:41.965 01:08:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:41.965 01:08:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:41.966 01:08:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:41.966 01:08:25 -- nvmf/common.sh@294 -- # net_devs=() 00:28:41.966 01:08:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:41.966 01:08:25 -- nvmf/common.sh@295 -- # e810=() 00:28:41.966 01:08:25 -- nvmf/common.sh@295 -- # local -ga e810 00:28:41.966 01:08:25 -- nvmf/common.sh@296 -- # x722=() 00:28:41.966 01:08:25 -- nvmf/common.sh@296 -- # local -ga x722 00:28:41.966 01:08:25 -- nvmf/common.sh@297 -- # mlx=() 00:28:41.966 01:08:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:41.966 01:08:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.966 01:08:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:41.966 01:08:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:41.966 01:08:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:41.966 01:08:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:41.966 01:08:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.966 01:08:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:41.966 01:08:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.966 01:08:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:41.966 01:08:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:41.966 01:08:25 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:41.966 01:08:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.966 01:08:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:41.966 01:08:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.966 01:08:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.966 01:08:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.966 01:08:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:41.966 01:08:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.966 01:08:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:41.966 01:08:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.966 01:08:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.966 01:08:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.966 01:08:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:41.966 01:08:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:41.966 01:08:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:41.966 01:08:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.966 01:08:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.966 01:08:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.966 01:08:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:41.966 01:08:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.966 01:08:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.966 01:08:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:41.966 01:08:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.966 01:08:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.966 01:08:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:41.966 01:08:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:41.966 01:08:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.966 01:08:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.966 01:08:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.966 01:08:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.966 01:08:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:41.966 01:08:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.966 01:08:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.966 01:08:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.966 01:08:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:41.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:41.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:28:41.966 00:28:41.966 --- 10.0.0.2 ping statistics --- 00:28:41.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.966 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:28:41.966 01:08:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:28:41.966 00:28:41.966 --- 10.0.0.1 ping statistics --- 00:28:41.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.966 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:41.966 01:08:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.966 01:08:25 -- nvmf/common.sh@410 -- # return 0 00:28:41.966 01:08:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:41.966 01:08:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.966 01:08:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:41.966 01:08:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.966 01:08:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:41.966 01:08:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:41.966 01:08:26 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:41.966 01:08:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:41.966 01:08:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:41.966 01:08:26 -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 01:08:26 -- nvmf/common.sh@469 -- # nvmfpid=3515152 00:28:41.966 01:08:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:41.966 01:08:26 -- nvmf/common.sh@470 -- # waitforlisten 3515152 00:28:41.966 01:08:26 -- common/autotest_common.sh@819 -- # '[' -z 3515152 ']' 00:28:41.966 01:08:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.966 01:08:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:41.966 01:08:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.966 01:08:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:41.966 01:08:26 -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 [2024-07-23 01:08:26.061141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:41.966 [2024-07-23 01:08:26.061228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.966 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.966 [2024-07-23 01:08:26.122910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.225 [2024-07-23 01:08:26.206243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:42.225 [2024-07-23 01:08:26.206401] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
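The target for this suite is started inside the cvl_0_0_ns_spdk namespace and the harness then blocks until its JSON-RPC socket answers. A minimal sketch of that bring-up, using the paths from this run and SPDK's stock rpc.py client; the real nvmfappstart/waitforlisten helpers also track retry counts and log files, so this is only the shape of the logic:

    # launch the target in the namespace, backgrounded, as traced above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to serve RPCs
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done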
00:28:42.225 [2024-07-23 01:08:26.206418] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.225 [2024-07-23 01:08:26.206430] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.225 [2024-07-23 01:08:26.206469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.159 01:08:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:43.159 01:08:27 -- common/autotest_common.sh@852 -- # return 0 00:28:43.159 01:08:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:43.159 01:08:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:43.159 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:43.159 01:08:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.159 01:08:27 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:43.159 01:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.159 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:43.159 [2024-07-23 01:08:27.074483] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.159 [2024-07-23 01:08:27.082638] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:43.159 null0 00:28:43.159 [2024-07-23 01:08:27.114600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.159 01:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.159 01:08:27 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3515307 00:28:43.159 01:08:27 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:43.159 01:08:27 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3515307 /tmp/host.sock 00:28:43.159 01:08:27 -- common/autotest_common.sh@819 -- # '[' -z 3515307 ']' 00:28:43.159 01:08:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:43.159 01:08:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:43.159 01:08:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:43.159 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:43.159 01:08:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:43.159 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:43.159 [2024-07-23 01:08:27.176471] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:28:43.159 [2024-07-23 01:08:27.176537] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515307 ] 00:28:43.159 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.159 [2024-07-23 01:08:27.238343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.159 [2024-07-23 01:08:27.327048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:43.159 [2024-07-23 01:08:27.327223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.417 01:08:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:43.417 01:08:27 -- common/autotest_common.sh@852 -- # return 0 00:28:43.417 01:08:27 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:43.417 01:08:27 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:43.417 01:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.417 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:43.417 01:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.417 01:08:27 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:43.417 01:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.417 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:43.417 01:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.417 01:08:27 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:43.417 01:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.417 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:44.792 [2024-07-23 01:08:28.582809] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:44.792 [2024-07-23 01:08:28.582846] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:44.792 [2024-07-23 01:08:28.582874] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:44.792 [2024-07-23 01:08:28.669159] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:44.792 [2024-07-23 01:08:28.854367] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:44.792 [2024-07-23 01:08:28.854424] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:44.792 [2024-07-23 01:08:28.854466] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:44.792 [2024-07-23 01:08:28.854493] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:44.792 [2024-07-23 01:08:28.854521] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:44.792 01:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.792 01:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.792 01:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.792 [2024-07-23 01:08:28.861383] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xde6f00 was disconnected and freed. delete nvme_qpair. 00:28:44.792 01:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.792 01:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.792 01:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.792 01:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:44.792 01:08:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:46.169 01:08:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.169 01:08:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.169 01:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.169 01:08:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.169 01:08:29 -- common/autotest_common.sh@10 -- # set +x 00:28:46.169 01:08:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.169 01:08:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.169 01:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.169 01:08:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:46.169 01:08:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:47.105 01:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.105 01:08:31 -- common/autotest_common.sh@10 -- # set +x 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:47.105 01:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:47.105 01:08:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:48.041 01:08:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.041 01:08:32 -- common/autotest_common.sh@10 -- # set +x 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.041 01:08:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:48.041 01:08:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.978 01:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.978 01:08:33 -- common/autotest_common.sh@10 -- # set +x 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.978 01:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:48.978 01:08:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:50.355 01:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:50.355 01:08:34 -- common/autotest_common.sh@10 -- # set +x 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:50.355 01:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:50.355 01:08:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:50.355 [2024-07-23 01:08:34.295439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:50.355 [2024-07-23 01:08:34.295504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.355 [2024-07-23 01:08:34.295527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.355 [2024-07-23 01:08:34.295546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.355 [2024-07-23 01:08:34.295561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.355 [2024-07-23 01:08:34.295577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.355 [2024-07-23 01:08:34.295593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.355 [2024-07-23 01:08:34.295610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:50.355 [2024-07-23 01:08:34.295634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.355 [2024-07-23 01:08:34.295664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.355 [2024-07-23 01:08:34.295679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.355 [2024-07-23 01:08:34.295692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdae150 is same with the state(5) to be set 00:28:50.355 [2024-07-23 01:08:34.305458] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdae150 (9): Bad file descriptor 00:28:50.355 [2024-07-23 01:08:34.315512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:51.291 01:08:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.291 01:08:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.291 01:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.291 01:08:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.291 01:08:35 -- common/autotest_common.sh@10 -- # set +x 00:28:51.291 01:08:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.291 01:08:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.291 [2024-07-23 01:08:35.336649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:52.228 [2024-07-23 01:08:36.360647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:52.228 [2024-07-23 01:08:36.360697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdae150 with addr=10.0.0.2, port=4420 00:28:52.228 [2024-07-23 01:08:36.360728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdae150 is same with the state(5) to be set 00:28:52.228 [2024-07-23 01:08:36.360758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.228 [2024-07-23 01:08:36.360775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.228 [2024-07-23 01:08:36.360789] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.228 [2024-07-23 01:08:36.360805] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:52.228 [2024-07-23 01:08:36.361197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdae150 (9): Bad file descriptor 00:28:52.228 [2024-07-23 01:08:36.361239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
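The wall of ABORTED - SQ DELETION completions and failed resets above is the intended fallout of the step traced earlier at discovery_remove_ifc.sh lines 75 and 76, where the target-side address is removed and the link taken down while the host still holds the data and discovery connections. The equivalent manual steps, with the namespace and device names taken from this run:

    # pull the target address out from under the live NVMe/TCP connections
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # from here on, every reconnect attempt from the host fails with errno 110
    # until the address is restored later in the test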
00:28:52.228 [2024-07-23 01:08:36.361284] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:52.228 [2024-07-23 01:08:36.361321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.228 [2024-07-23 01:08:36.361342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.228 [2024-07-23 01:08:36.361360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.228 [2024-07-23 01:08:36.361375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.228 [2024-07-23 01:08:36.361391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.228 [2024-07-23 01:08:36.361405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.228 [2024-07-23 01:08:36.361420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.228 [2024-07-23 01:08:36.361434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.228 [2024-07-23 01:08:36.361450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.228 [2024-07-23 01:08:36.361464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.228 [2024-07-23 01:08:36.361479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
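From this point the harness simply polls the host's bdev list: first until it drains to nothing (the old controller is gone), then, once the interface is restored a little further down, until a freshly attached nvme1n1 shows up. The repeated rpc_cmd/jq/sort/xargs records and the sleep 1 between them amount to a loop along these lines; this is a sketch of the waiting logic, not the script's actual wait_for_bdev helper:

    wait_for_bdev_list() {
        local expected=$1   # "" while waiting for the drain, "nvme1n1" after the re-attach
        local bdevs
        while :; do
            bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ "$bdevs" == "$expected" ]] && return 0
            sleep 1
        done
    }
    # usage mirroring the two waits in this trace:
    #   wait_for_bdev_list ""          # old bdev disappears after the interface is pulled
    #   wait_for_bdev_list "nvme1n1"   # new bdev appears after the interface comes back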
00:28:52.228 [2024-07-23 01:08:36.361762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad680 (9): Bad file descriptor 00:28:52.229 [2024-07-23 01:08:36.362777] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:52.229 [2024-07-23 01:08:36.362797] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:52.229 01:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.229 01:08:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:52.229 01:08:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:53.607 01:08:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.607 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.607 01:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.607 01:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.607 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.607 01:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.608 01:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.608 01:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.608 01:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:53.608 01:08:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.177 [2024-07-23 01:08:38.379111] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:54.177 [2024-07-23 01:08:38.379147] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:54.437 [2024-07-23 01:08:38.379174] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:54.437 [2024-07-23 01:08:38.508591] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:54.437 01:08:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.437 01:08:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.437 01:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.437 01:08:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.437 01:08:38 -- common/autotest_common.sh@10 -- # set +x 00:28:54.437 01:08:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.437 
01:08:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.437 01:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.437 01:08:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:54.437 01:08:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.437 [2024-07-23 01:08:38.568552] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:54.437 [2024-07-23 01:08:38.568602] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:54.437 [2024-07-23 01:08:38.568647] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:54.437 [2024-07-23 01:08:38.568687] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:54.437 [2024-07-23 01:08:38.568701] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:54.437 [2024-07-23 01:08:38.577308] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdbb4c0 was disconnected and freed. delete nvme_qpair. 00:28:55.371 01:08:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.371 01:08:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.371 01:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.371 01:08:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.371 01:08:39 -- common/autotest_common.sh@10 -- # set +x 00:28:55.371 01:08:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.371 01:08:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.631 01:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.631 01:08:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:55.631 01:08:39 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:55.631 01:08:39 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3515307 00:28:55.631 01:08:39 -- common/autotest_common.sh@926 -- # '[' -z 3515307 ']' 00:28:55.631 01:08:39 -- common/autotest_common.sh@930 -- # kill -0 3515307 00:28:55.631 01:08:39 -- common/autotest_common.sh@931 -- # uname 00:28:55.631 01:08:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.631 01:08:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3515307 00:28:55.631 01:08:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:55.631 01:08:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:55.631 01:08:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3515307' 00:28:55.631 killing process with pid 3515307 00:28:55.631 01:08:39 -- common/autotest_common.sh@945 -- # kill 3515307 00:28:55.631 01:08:39 -- common/autotest_common.sh@950 -- # wait 3515307 00:28:55.631 01:08:39 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:55.631 01:08:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:55.631 01:08:39 -- nvmf/common.sh@116 -- # sync 00:28:55.890 01:08:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:55.890 01:08:39 -- nvmf/common.sh@119 -- # set +e 00:28:55.890 01:08:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:55.890 01:08:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:55.890 rmmod nvme_tcp 00:28:55.890 rmmod nvme_fabrics 00:28:55.890 rmmod nvme_keyring 00:28:55.890 01:08:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:55.890 01:08:39 -- nvmf/common.sh@123 -- # set -e 00:28:55.890 01:08:39 -- 
nvmf/common.sh@124 -- # return 0 00:28:55.890 01:08:39 -- nvmf/common.sh@477 -- # '[' -n 3515152 ']' 00:28:55.890 01:08:39 -- nvmf/common.sh@478 -- # killprocess 3515152 00:28:55.890 01:08:39 -- common/autotest_common.sh@926 -- # '[' -z 3515152 ']' 00:28:55.890 01:08:39 -- common/autotest_common.sh@930 -- # kill -0 3515152 00:28:55.890 01:08:39 -- common/autotest_common.sh@931 -- # uname 00:28:55.890 01:08:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.890 01:08:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3515152 00:28:55.890 01:08:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:55.890 01:08:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:55.890 01:08:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3515152' 00:28:55.890 killing process with pid 3515152 00:28:55.890 01:08:39 -- common/autotest_common.sh@945 -- # kill 3515152 00:28:55.891 01:08:39 -- common/autotest_common.sh@950 -- # wait 3515152 00:28:56.150 01:08:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:56.150 01:08:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:56.150 01:08:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:56.150 01:08:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.150 01:08:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:56.150 01:08:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.150 01:08:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.150 01:08:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.056 01:08:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:58.056 00:28:58.056 real 0m18.239s 00:28:58.056 user 0m25.571s 00:28:58.056 sys 0m2.866s 00:28:58.056 01:08:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.056 01:08:42 -- common/autotest_common.sh@10 -- # set +x 00:28:58.056 ************************************ 00:28:58.056 END TEST nvmf_discovery_remove_ifc 00:28:58.056 ************************************ 00:28:58.056 01:08:42 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:58.056 01:08:42 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:58.056 01:08:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:58.056 01:08:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:58.056 01:08:42 -- common/autotest_common.sh@10 -- # set +x 00:28:58.056 ************************************ 00:28:58.056 START TEST nvmf_digest 00:28:58.056 ************************************ 00:28:58.056 01:08:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:58.315 * Looking for test storage... 
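The teardown just above follows the usual nvmftestfini pattern: stop the host app, sync, unload the NVMe/TCP kernel modules on the initiator side, kill the target, tear down the SPDK namespace, and flush the initiator address; the digest suite that starts here rebuilds the same topology from scratch. A rough sketch of the equivalent manual cleanup, with the pids and interface names taken from this run (whether remove_spdk_ns deletes the namespace outright is an assumption):

    kill 3515307                      # host app holding /tmp/host.sock
    sync
    modprobe -v -r nvme-tcp           # also drags out nvme_fabrics/nvme_keyring, as seen above
    modprobe -v -r nvme-fabrics
    kill 3515152                      # nvmf_tgt running inside cvl_0_0_ns_spdk
    ip netns delete cvl_0_0_ns_spdk   # assumption: what remove_spdk_ns boils down to here
    ip -4 addr flush cvl_0_1          # drop the initiator-side address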
00:28:58.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.316 01:08:42 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.316 01:08:42 -- nvmf/common.sh@7 -- # uname -s 00:28:58.316 01:08:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.316 01:08:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.316 01:08:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.316 01:08:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.316 01:08:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.316 01:08:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.316 01:08:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.316 01:08:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.316 01:08:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.316 01:08:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.316 01:08:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.316 01:08:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.316 01:08:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.316 01:08:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.316 01:08:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.316 01:08:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.316 01:08:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.316 01:08:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.316 01:08:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.316 01:08:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.316 01:08:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.316 01:08:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.316 01:08:42 -- paths/export.sh@5 -- # export PATH 00:28:58.316 01:08:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.316 01:08:42 -- nvmf/common.sh@46 -- # : 0 00:28:58.316 01:08:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:58.316 01:08:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:58.316 01:08:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:58.316 01:08:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.316 01:08:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.316 01:08:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:58.316 01:08:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:58.316 01:08:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:58.316 01:08:42 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:58.316 01:08:42 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:58.316 01:08:42 -- host/digest.sh@16 -- # runtime=2 00:28:58.316 01:08:42 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:58.316 01:08:42 -- host/digest.sh@132 -- # nvmftestinit 00:28:58.316 01:08:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:58.316 01:08:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.316 01:08:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:58.316 01:08:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:58.316 01:08:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:58.316 01:08:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.316 01:08:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.316 01:08:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.316 01:08:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:58.316 01:08:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:58.316 01:08:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:58.316 01:08:42 -- common/autotest_common.sh@10 -- # set +x 00:29:00.250 01:08:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:00.250 01:08:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:00.250 01:08:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:00.250 01:08:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:00.250 01:08:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:00.250 01:08:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:00.250 01:08:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:00.250 01:08:44 -- 
nvmf/common.sh@294 -- # net_devs=() 00:29:00.250 01:08:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:00.251 01:08:44 -- nvmf/common.sh@295 -- # e810=() 00:29:00.251 01:08:44 -- nvmf/common.sh@295 -- # local -ga e810 00:29:00.251 01:08:44 -- nvmf/common.sh@296 -- # x722=() 00:29:00.251 01:08:44 -- nvmf/common.sh@296 -- # local -ga x722 00:29:00.251 01:08:44 -- nvmf/common.sh@297 -- # mlx=() 00:29:00.251 01:08:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:00.251 01:08:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.251 01:08:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:00.251 01:08:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:00.251 01:08:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:00.251 01:08:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:00.251 01:08:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:00.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:00.251 01:08:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:00.251 01:08:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:00.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:00.251 01:08:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:00.251 01:08:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:00.251 01:08:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.251 01:08:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:00.251 01:08:44 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.251 01:08:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:00.251 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:00.251 01:08:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.251 01:08:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:00.251 01:08:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.251 01:08:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:00.251 01:08:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.251 01:08:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:00.251 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:00.251 01:08:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.251 01:08:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:00.251 01:08:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:00.251 01:08:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:00.251 01:08:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.251 01:08:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.251 01:08:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.251 01:08:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:00.251 01:08:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.251 01:08:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.251 01:08:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:00.251 01:08:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.251 01:08:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.251 01:08:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:00.251 01:08:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:00.251 01:08:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.251 01:08:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.251 01:08:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.251 01:08:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.251 01:08:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:00.251 01:08:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.251 01:08:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.251 01:08:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.251 01:08:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:00.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:29:00.251 00:29:00.251 --- 10.0.0.2 ping statistics --- 00:29:00.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.251 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:00.251 01:08:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:29:00.251 00:29:00.251 --- 10.0.0.1 ping statistics --- 00:29:00.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.251 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:29:00.251 01:08:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.251 01:08:44 -- nvmf/common.sh@410 -- # return 0 00:29:00.251 01:08:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:00.251 01:08:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.251 01:08:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:00.251 01:08:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.251 01:08:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:00.251 01:08:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:00.251 01:08:44 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:00.251 01:08:44 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:00.251 01:08:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.251 01:08:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.251 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.251 ************************************ 00:29:00.251 START TEST nvmf_digest_clean 00:29:00.251 ************************************ 00:29:00.251 01:08:44 -- common/autotest_common.sh@1104 -- # run_digest 00:29:00.251 01:08:44 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:00.251 01:08:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:00.251 01:08:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:00.251 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.251 01:08:44 -- nvmf/common.sh@469 -- # nvmfpid=3518822 00:29:00.251 01:08:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:00.251 01:08:44 -- nvmf/common.sh@470 -- # waitforlisten 3518822 00:29:00.251 01:08:44 -- common/autotest_common.sh@819 -- # '[' -z 3518822 ']' 00:29:00.251 01:08:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.251 01:08:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:00.251 01:08:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.251 01:08:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:00.251 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.251 [2024-07-23 01:08:44.355790] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:29:00.251 [2024-07-23 01:08:44.355885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.251 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.251 [2024-07-23 01:08:44.426105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.510 [2024-07-23 01:08:44.515591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:00.510 [2024-07-23 01:08:44.515778] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.510 [2024-07-23 01:08:44.515799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.510 [2024-07-23 01:08:44.515815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.510 [2024-07-23 01:08:44.515845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.510 01:08:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.510 01:08:44 -- common/autotest_common.sh@852 -- # return 0 00:29:00.510 01:08:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:00.510 01:08:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:00.510 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.510 01:08:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.511 01:08:44 -- host/digest.sh@120 -- # common_target_config 00:29:00.511 01:08:44 -- host/digest.sh@43 -- # rpc_cmd 00:29:00.511 01:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.511 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.511 null0 00:29:00.511 [2024-07-23 01:08:44.689735] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.770 [2024-07-23 01:08:44.713929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.770 01:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.770 01:08:44 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:00.770 01:08:44 -- host/digest.sh@77 -- # local rw bs qd 00:29:00.770 01:08:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.770 01:08:44 -- host/digest.sh@80 -- # rw=randread 00:29:00.770 01:08:44 -- host/digest.sh@80 -- # bs=4096 00:29:00.770 01:08:44 -- host/digest.sh@80 -- # qd=128 00:29:00.770 01:08:44 -- host/digest.sh@82 -- # bperfpid=3518971 00:29:00.770 01:08:44 -- host/digest.sh@83 -- # waitforlisten 3518971 /var/tmp/bperf.sock 00:29:00.770 01:08:44 -- common/autotest_common.sh@819 -- # '[' -z 3518971 ']' 00:29:00.770 01:08:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.770 01:08:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:00.770 01:08:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
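
The xtrace records above have just finished nvmf_tcp_init and launched the target inside its namespace. Collapsed into plain commands, the target-side setup amounts to the following (a sketch assembled from those trace records, with the jenkins workspace path shortened; it is not an excerpt of nvmf/common.sh itself):

    # Give the initiator-side port an address in the default namespace and move the
    # target-side port into its own namespace (interface names as reported above).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check connectivity in both directions (the two ping replies logged above).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # Start the NVMe-oF target inside the namespace, paused until RPC init completes.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # common_target_config then creates the TCP transport ('-t tcp -o'), a null0 bdev
    # (the 'null0' line above) and a listener on 10.0.0.2:4420 -- see the tcp.c notices;
    # the rpc_cmd calls themselves are not echoed in the trace.
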
00:29:00.770 01:08:44 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:00.770 01:08:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:00.770 01:08:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.770 [2024-07-23 01:08:44.758845] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:00.770 [2024-07-23 01:08:44.758923] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518971 ] 00:29:00.770 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.770 [2024-07-23 01:08:44.820122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.770 [2024-07-23 01:08:44.911722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.770 01:08:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.770 01:08:44 -- common/autotest_common.sh@852 -- # return 0 00:29:00.770 01:08:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:00.770 01:08:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:00.770 01:08:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.340 01:08:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.340 01:08:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.599 nvme0n1 00:29:01.599 01:08:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:01.599 01:08:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.599 Running I/O for 2 seconds... 
00:29:04.137 00:29:04.137 Latency(us) 00:29:04.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.137 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:04.137 nvme0n1 : 2.04 15765.91 61.59 0.00 0.00 7953.41 2524.35 47768.46 00:29:04.137 =================================================================================================================== 00:29:04.137 Total : 15765.91 61.59 0.00 0.00 7953.41 2524.35 47768.46 00:29:04.137 0 00:29:04.137 01:08:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:04.137 01:08:47 -- host/digest.sh@92 -- # get_accel_stats 00:29:04.137 01:08:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.137 01:08:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.137 01:08:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.137 | select(.opcode=="crc32c") 00:29:04.137 | "\(.module_name) \(.executed)"' 00:29:04.137 01:08:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:04.137 01:08:48 -- host/digest.sh@93 -- # exp_module=software 00:29:04.137 01:08:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:04.137 01:08:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.137 01:08:48 -- host/digest.sh@97 -- # killprocess 3518971 00:29:04.137 01:08:48 -- common/autotest_common.sh@926 -- # '[' -z 3518971 ']' 00:29:04.137 01:08:48 -- common/autotest_common.sh@930 -- # kill -0 3518971 00:29:04.137 01:08:48 -- common/autotest_common.sh@931 -- # uname 00:29:04.137 01:08:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:04.137 01:08:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3518971 00:29:04.137 01:08:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:04.137 01:08:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:04.137 01:08:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3518971' 00:29:04.137 killing process with pid 3518971 00:29:04.137 01:08:48 -- common/autotest_common.sh@945 -- # kill 3518971 00:29:04.137 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.137 00:29:04.137 Latency(us) 00:29:04.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.137 =================================================================================================================== 00:29:04.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.137 01:08:48 -- common/autotest_common.sh@950 -- # wait 3518971 00:29:04.137 01:08:48 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:04.137 01:08:48 -- host/digest.sh@77 -- # local rw bs qd 00:29:04.137 01:08:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.137 01:08:48 -- host/digest.sh@80 -- # rw=randread 00:29:04.137 01:08:48 -- host/digest.sh@80 -- # bs=131072 00:29:04.137 01:08:48 -- host/digest.sh@80 -- # qd=16 00:29:04.137 01:08:48 -- host/digest.sh@82 -- # bperfpid=3519388 00:29:04.137 01:08:48 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:04.137 01:08:48 -- host/digest.sh@83 -- # waitforlisten 3519388 /var/tmp/bperf.sock 00:29:04.137 01:08:48 -- common/autotest_common.sh@819 -- # '[' -z 3519388 ']' 00:29:04.137 01:08:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
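
Each of the remaining bperf passes (131072/16 randread, then the two randwrite runs) repeats the initiator-side sequence just traced for the first randread run. Condensed, and with paths shortened to the SPDK tree, it looks roughly like this sketch rather than the script verbatim:

    # Start bdevperf paused on its own RPC socket, one run per rw/bs/qd combination.
    ./build/examples/bdevperf -m 2 -z --wait-for-rpc -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 &

    # Finish framework init, then attach the remote namespace with data digest enabled.
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Drive the 2-second workload, then check that the crc32c digests were really computed.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # The clean test passes when the expected module ("software" here) reports a
    # non-zero executed count, after which the bdevperf process is killed.
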
00:29:04.137 01:08:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:04.137 01:08:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.137 01:08:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:04.137 01:08:48 -- common/autotest_common.sh@10 -- # set +x 00:29:04.137 [2024-07-23 01:08:48.305511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:04.137 [2024-07-23 01:08:48.305580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519388 ] 00:29:04.137 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.137 Zero copy mechanism will not be used. 00:29:04.137 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.395 [2024-07-23 01:08:48.368088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.395 [2024-07-23 01:08:48.461484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.395 01:08:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.395 01:08:48 -- common/autotest_common.sh@852 -- # return 0 00:29:04.395 01:08:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:04.395 01:08:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:04.395 01:08:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:04.652 01:08:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.652 01:08:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.218 nvme0n1 00:29:05.218 01:08:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:05.218 01:08:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.218 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:05.218 Zero copy mechanism will not be used. 00:29:05.218 Running I/O for 2 seconds... 
00:29:07.123 00:29:07.123 Latency(us) 00:29:07.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:07.123 nvme0n1 : 2.00 2332.44 291.56 0.00 0.00 6854.85 5801.15 16990.81 00:29:07.123 =================================================================================================================== 00:29:07.123 Total : 2332.44 291.56 0.00 0.00 6854.85 5801.15 16990.81 00:29:07.123 0 00:29:07.383 01:08:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:07.383 01:08:51 -- host/digest.sh@92 -- # get_accel_stats 00:29:07.383 01:08:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:07.383 01:08:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:07.383 | select(.opcode=="crc32c") 00:29:07.383 | "\(.module_name) \(.executed)"' 00:29:07.383 01:08:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:07.642 01:08:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:07.642 01:08:51 -- host/digest.sh@93 -- # exp_module=software 00:29:07.642 01:08:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:07.642 01:08:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:07.642 01:08:51 -- host/digest.sh@97 -- # killprocess 3519388 00:29:07.642 01:08:51 -- common/autotest_common.sh@926 -- # '[' -z 3519388 ']' 00:29:07.642 01:08:51 -- common/autotest_common.sh@930 -- # kill -0 3519388 00:29:07.642 01:08:51 -- common/autotest_common.sh@931 -- # uname 00:29:07.642 01:08:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:07.642 01:08:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3519388 00:29:07.642 01:08:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:07.642 01:08:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:07.642 01:08:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3519388' 00:29:07.642 killing process with pid 3519388 00:29:07.642 01:08:51 -- common/autotest_common.sh@945 -- # kill 3519388 00:29:07.642 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.642 00:29:07.642 Latency(us) 00:29:07.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.642 =================================================================================================================== 00:29:07.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.642 01:08:51 -- common/autotest_common.sh@950 -- # wait 3519388 00:29:07.642 01:08:51 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:07.642 01:08:51 -- host/digest.sh@77 -- # local rw bs qd 00:29:07.642 01:08:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:07.642 01:08:51 -- host/digest.sh@80 -- # rw=randwrite 00:29:07.642 01:08:51 -- host/digest.sh@80 -- # bs=4096 00:29:07.642 01:08:51 -- host/digest.sh@80 -- # qd=128 00:29:07.642 01:08:51 -- host/digest.sh@82 -- # bperfpid=3519815 00:29:07.642 01:08:51 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:07.642 01:08:51 -- host/digest.sh@83 -- # waitforlisten 3519815 /var/tmp/bperf.sock 00:29:07.642 01:08:51 -- common/autotest_common.sh@819 -- # '[' -z 3519815 ']' 00:29:07.642 01:08:51 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:07.642 01:08:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:07.642 01:08:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.642 01:08:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:07.642 01:08:51 -- common/autotest_common.sh@10 -- # set +x 00:29:07.900 [2024-07-23 01:08:51.878517] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:07.900 [2024-07-23 01:08:51.878607] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519815 ] 00:29:07.900 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.900 [2024-07-23 01:08:51.940145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.900 [2024-07-23 01:08:52.026437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.900 01:08:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:07.900 01:08:52 -- common/autotest_common.sh@852 -- # return 0 00:29:07.900 01:08:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:07.900 01:08:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:07.900 01:08:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:08.469 01:08:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.469 01:08:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.728 nvme0n1 00:29:08.728 01:08:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:08.728 01:08:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.987 Running I/O for 2 seconds... 
00:29:10.893 00:29:10.893 Latency(us) 00:29:10.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.893 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.893 nvme0n1 : 2.01 19124.02 74.70 0.00 0.00 6677.89 3058.35 9417.77 00:29:10.893 =================================================================================================================== 00:29:10.893 Total : 19124.02 74.70 0.00 0.00 6677.89 3058.35 9417.77 00:29:10.893 0 00:29:10.893 01:08:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:10.893 01:08:54 -- host/digest.sh@92 -- # get_accel_stats 00:29:10.893 01:08:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:10.893 01:08:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:10.893 01:08:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:10.893 | select(.opcode=="crc32c") 00:29:10.893 | "\(.module_name) \(.executed)"' 00:29:11.152 01:08:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:11.152 01:08:55 -- host/digest.sh@93 -- # exp_module=software 00:29:11.152 01:08:55 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:11.152 01:08:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:11.152 01:08:55 -- host/digest.sh@97 -- # killprocess 3519815 00:29:11.152 01:08:55 -- common/autotest_common.sh@926 -- # '[' -z 3519815 ']' 00:29:11.152 01:08:55 -- common/autotest_common.sh@930 -- # kill -0 3519815 00:29:11.152 01:08:55 -- common/autotest_common.sh@931 -- # uname 00:29:11.152 01:08:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:11.152 01:08:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3519815 00:29:11.152 01:08:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:11.152 01:08:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:11.152 01:08:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3519815' 00:29:11.152 killing process with pid 3519815 00:29:11.152 01:08:55 -- common/autotest_common.sh@945 -- # kill 3519815 00:29:11.152 Received shutdown signal, test time was about 2.000000 seconds 00:29:11.152 00:29:11.152 Latency(us) 00:29:11.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.152 =================================================================================================================== 00:29:11.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.152 01:08:55 -- common/autotest_common.sh@950 -- # wait 3519815 00:29:11.411 01:08:55 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:11.411 01:08:55 -- host/digest.sh@77 -- # local rw bs qd 00:29:11.411 01:08:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:11.411 01:08:55 -- host/digest.sh@80 -- # rw=randwrite 00:29:11.411 01:08:55 -- host/digest.sh@80 -- # bs=131072 00:29:11.411 01:08:55 -- host/digest.sh@80 -- # qd=16 00:29:11.411 01:08:55 -- host/digest.sh@82 -- # bperfpid=3520231 00:29:11.411 01:08:55 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:11.411 01:08:55 -- host/digest.sh@83 -- # waitforlisten 3520231 /var/tmp/bperf.sock 00:29:11.411 01:08:55 -- common/autotest_common.sh@819 -- # '[' -z 3520231 ']' 00:29:11.411 01:08:55 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:11.411 01:08:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:11.411 01:08:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:11.411 01:08:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:11.411 01:08:55 -- common/autotest_common.sh@10 -- # set +x 00:29:11.411 [2024-07-23 01:08:55.527974] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:11.411 [2024-07-23 01:08:55.528069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520231 ] 00:29:11.411 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.411 Zero copy mechanism will not be used. 00:29:11.411 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.411 [2024-07-23 01:08:55.589666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.669 [2024-07-23 01:08:55.676285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.669 01:08:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:11.669 01:08:55 -- common/autotest_common.sh@852 -- # return 0 00:29:11.669 01:08:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:11.669 01:08:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:11.669 01:08:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:11.927 01:08:56 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.927 01:08:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.495 nvme0n1 00:29:12.495 01:08:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:12.495 01:08:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.495 Zero copy mechanism will not be used. 00:29:12.495 Running I/O for 2 seconds... 
00:29:15.033 00:29:15.033 Latency(us) 00:29:15.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.033 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:15.033 nvme0n1 : 2.01 2058.01 257.25 0.00 0.00 7752.94 6068.15 18252.99 00:29:15.033 =================================================================================================================== 00:29:15.033 Total : 2058.01 257.25 0.00 0.00 7752.94 6068.15 18252.99 00:29:15.033 0 00:29:15.033 01:08:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:15.033 01:08:58 -- host/digest.sh@92 -- # get_accel_stats 00:29:15.033 01:08:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:15.033 01:08:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.033 01:08:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:15.033 | select(.opcode=="crc32c") 00:29:15.033 | "\(.module_name) \(.executed)"' 00:29:15.033 01:08:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:15.033 01:08:58 -- host/digest.sh@93 -- # exp_module=software 00:29:15.033 01:08:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:15.033 01:08:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.033 01:08:58 -- host/digest.sh@97 -- # killprocess 3520231 00:29:15.033 01:08:58 -- common/autotest_common.sh@926 -- # '[' -z 3520231 ']' 00:29:15.033 01:08:58 -- common/autotest_common.sh@930 -- # kill -0 3520231 00:29:15.033 01:08:58 -- common/autotest_common.sh@931 -- # uname 00:29:15.033 01:08:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:15.033 01:08:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3520231 00:29:15.033 01:08:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:15.033 01:08:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:15.033 01:08:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3520231' 00:29:15.033 killing process with pid 3520231 00:29:15.033 01:08:58 -- common/autotest_common.sh@945 -- # kill 3520231 00:29:15.033 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.033 00:29:15.033 Latency(us) 00:29:15.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.033 =================================================================================================================== 00:29:15.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.033 01:08:58 -- common/autotest_common.sh@950 -- # wait 3520231 00:29:15.033 01:08:59 -- host/digest.sh@126 -- # killprocess 3518822 00:29:15.033 01:08:59 -- common/autotest_common.sh@926 -- # '[' -z 3518822 ']' 00:29:15.033 01:08:59 -- common/autotest_common.sh@930 -- # kill -0 3518822 00:29:15.033 01:08:59 -- common/autotest_common.sh@931 -- # uname 00:29:15.033 01:08:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:15.033 01:08:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3518822 00:29:15.033 01:08:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:15.033 01:08:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:15.033 01:08:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3518822' 00:29:15.033 killing process with pid 3518822 00:29:15.033 01:08:59 -- common/autotest_common.sh@945 -- # kill 3518822 00:29:15.033 01:08:59 -- common/autotest_common.sh@950 -- # wait 3518822 
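
Every bperf run, and finally the nvmf target itself, is torn down through the killprocess helper whose xtrace appears repeatedly above. Reduced to its visible steps (a simplified sketch, not the autotest_common.sh source; the real helper also branches when the command name turns out to be a sudo wrapper rather than an SPDK reactor):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1             # the '[' -z "$pid" ']' guard seen above
        kill -0 "$pid" || return 1            # only proceed if the process still exists
        if [ "$(uname)" = Linux ]; then
            ps --no-headers -o comm= "$pid"   # log which process (reactor_0/reactor_1) is dying
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it so the next pass starts clean
    }
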
00:29:15.292 00:29:15.292 real 0m15.077s 00:29:15.292 user 0m29.640s 00:29:15.292 sys 0m4.203s 00:29:15.292 01:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.292 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.292 ************************************ 00:29:15.292 END TEST nvmf_digest_clean 00:29:15.292 ************************************ 00:29:15.292 01:08:59 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:15.292 01:08:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:15.292 01:08:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:15.292 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.292 ************************************ 00:29:15.292 START TEST nvmf_digest_error 00:29:15.292 ************************************ 00:29:15.292 01:08:59 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:15.292 01:08:59 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:15.292 01:08:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:15.292 01:08:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:15.292 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.292 01:08:59 -- nvmf/common.sh@469 -- # nvmfpid=3520795 00:29:15.292 01:08:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:15.292 01:08:59 -- nvmf/common.sh@470 -- # waitforlisten 3520795 00:29:15.292 01:08:59 -- common/autotest_common.sh@819 -- # '[' -z 3520795 ']' 00:29:15.292 01:08:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.292 01:08:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:15.292 01:08:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.292 01:08:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:15.292 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.292 [2024-07-23 01:08:59.462361] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:15.292 [2024-07-23 01:08:59.462460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.552 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.552 [2024-07-23 01:08:59.533806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.552 [2024-07-23 01:08:59.621546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:15.552 [2024-07-23 01:08:59.621732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.552 [2024-07-23 01:08:59.621754] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.552 [2024-07-23 01:08:59.621779] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:15.552 [2024-07-23 01:08:59.621818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.552 01:08:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.552 01:08:59 -- common/autotest_common.sh@852 -- # return 0 00:29:15.552 01:08:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:15.552 01:08:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:15.552 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.552 01:08:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.552 01:08:59 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:15.552 01:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.552 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.552 [2024-07-23 01:08:59.686391] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:15.552 01:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.552 01:08:59 -- host/digest.sh@104 -- # common_target_config 00:29:15.552 01:08:59 -- host/digest.sh@43 -- # rpc_cmd 00:29:15.552 01:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.552 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.811 null0 00:29:15.811 [2024-07-23 01:08:59.803631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.811 [2024-07-23 01:08:59.827858] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.811 01:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.811 01:08:59 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:15.811 01:08:59 -- host/digest.sh@54 -- # local rw bs qd 00:29:15.811 01:08:59 -- host/digest.sh@56 -- # rw=randread 00:29:15.811 01:08:59 -- host/digest.sh@56 -- # bs=4096 00:29:15.811 01:08:59 -- host/digest.sh@56 -- # qd=128 00:29:15.811 01:08:59 -- host/digest.sh@58 -- # bperfpid=3520822 00:29:15.811 01:08:59 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:15.811 01:08:59 -- host/digest.sh@60 -- # waitforlisten 3520822 /var/tmp/bperf.sock 00:29:15.811 01:08:59 -- common/autotest_common.sh@819 -- # '[' -z 3520822 ']' 00:29:15.811 01:08:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.811 01:08:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:15.811 01:08:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.811 01:08:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:15.811 01:08:59 -- common/autotest_common.sh@10 -- # set +x 00:29:15.811 [2024-07-23 01:08:59.874692] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:29:15.811 [2024-07-23 01:08:59.874763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520822 ] 00:29:15.811 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.811 [2024-07-23 01:08:59.939718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.069 [2024-07-23 01:09:00.035407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.663 01:09:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:16.663 01:09:00 -- common/autotest_common.sh@852 -- # return 0 00:29:16.663 01:09:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.663 01:09:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.920 01:09:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:16.920 01:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.920 01:09:01 -- common/autotest_common.sh@10 -- # set +x 00:29:16.920 01:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.920 01:09:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.920 01:09:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.178 nvme0n1 00:29:17.436 01:09:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:17.436 01:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:17.436 01:09:01 -- common/autotest_common.sh@10 -- # set +x 00:29:17.436 01:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:17.436 01:09:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:17.436 01:09:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:17.437 Running I/O for 2 seconds... 
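
The flood of nvme_tcp data-digest errors that follows is the expected outcome of this test, not a failure: the target's crc32c operation has been assigned to the "error" accel module (notice above) and has just been told to corrupt 256 operations, while the initiator keeps retrying thanks to --bdev-retry-count -1. The trace drives this through the rpc_cmd/bperf_rpc wrappers; written as direct rpc.py calls (an assumption about how those wrappers resolve, with paths shortened) the sequence is roughly:

    # Target side (default RPC socket): route crc32c through the error-injection module,
    # but keep injection disabled so the controller can attach cleanly first.
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Initiator side (bdevperf): retry forever on errors and enable data digest.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now corrupt the next 256 crc32c operations and run the 2-second workload; each
    # corrupted digest surfaces below as a 'data digest error' plus a transient
    # transport error completion (00/22) that the initiator retries.
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
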
00:29:17.437 [2024-07-23 01:09:01.526815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.526874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.526902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.437 [2024-07-23 01:09:01.544720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.544752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.544769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.437 [2024-07-23 01:09:01.562626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.562687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.562706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.437 [2024-07-23 01:09:01.580323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.580358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.580380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.437 [2024-07-23 01:09:01.596386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.596417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.596433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.437 [2024-07-23 01:09:01.614234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.614271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.614290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.437 [2024-07-23 01:09:01.630902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.437 [2024-07-23 01:09:01.630959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.437 [2024-07-23 01:09:01.630979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.696 [2024-07-23 01:09:01.649418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.696 [2024-07-23 01:09:01.649455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.696 [2024-07-23 01:09:01.649475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.696 [2024-07-23 01:09:01.665440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.696 [2024-07-23 01:09:01.665473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.696 [2024-07-23 01:09:01.665490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.696 [2024-07-23 01:09:01.681861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.696 [2024-07-23 01:09:01.681894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.696 [2024-07-23 01:09:01.681916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.696 [2024-07-23 01:09:01.698581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.696 [2024-07-23 01:09:01.698628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.696 [2024-07-23 01:09:01.698650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.696 [2024-07-23 01:09:01.710601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.696 [2024-07-23 01:09:01.710644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.696 [2024-07-23 01:09:01.710686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.696 [2024-07-23 01:09:01.728288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.728324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.728343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.746298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.746334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.746353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.761322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.761358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.761378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.773631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.773679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.773705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.786785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.786817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.786835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.798889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.798936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.798960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.811563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.811599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.811630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.823706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.823737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.823754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.835861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.835902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 
01:09:01.835919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.848694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.848726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.848745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.861288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.861320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.861341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.872717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.872748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.872771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.884503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.884544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.884562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.697 [2024-07-23 01:09:01.896087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.697 [2024-07-23 01:09:01.896118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.697 [2024-07-23 01:09:01.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.908232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.908264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.908281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.919819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.919850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6956 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.919868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.931294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.931325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.931342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.942792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.942823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.942841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.955231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.955262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.955279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.966608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.966664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.966683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.978271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.978302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.978319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:01.989693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:01.989725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:01.989743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.002046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.002077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.002094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.013440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.013471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.013489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.025204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.025236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.025253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.036420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.036452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.036469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.048787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.048819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.048837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.060409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.060456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.060474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.071834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.071866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.071885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.084244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.084276] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.084305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.096023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.096055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.096073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.107526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.107557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.107574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.119875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.119907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.119924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.131125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.131157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.131174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.143097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.143145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.958 [2024-07-23 01:09:02.154602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:17.958 [2024-07-23 01:09:02.154657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.958 [2024-07-23 01:09:02.154676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.219 [2024-07-23 01:09:02.166790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10b9e30) 00:29:18.219 [2024-07-23 01:09:02.166821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.219 [2024-07-23 01:09:02.166839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.219 [2024-07-23 01:09:02.178324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.219 [2024-07-23 01:09:02.178356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.219 [2024-07-23 01:09:02.178373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.219 [2024-07-23 01:09:02.190009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.219 [2024-07-23 01:09:02.190049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.219 [2024-07-23 01:09:02.190067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.219 [2024-07-23 01:09:02.201431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.219 [2024-07-23 01:09:02.201462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.219 [2024-07-23 01:09:02.201479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.219 [2024-07-23 01:09:02.212860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.219 [2024-07-23 01:09:02.212892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.219 [2024-07-23 01:09:02.212909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.219 [2024-07-23 01:09:02.225351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.219 [2024-07-23 01:09:02.225386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.219 [2024-07-23 01:09:02.225403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.236989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.237023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.237041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.248414] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.248445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.248462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.260635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.260665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.260683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.272188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.272229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.272246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.284683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.284714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.296304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.296335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.296352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.307477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.307508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.307525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.318920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.318951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.318968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.331287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.331328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.331346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.342927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.342958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.342976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.354840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.354872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.354891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.366300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.366330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.366348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.377769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.377800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.377818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.390001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.390042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.390060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.401663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.401694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.401712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.220 [2024-07-23 01:09:02.412888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.220 [2024-07-23 01:09:02.412920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.220 [2024-07-23 01:09:02.412954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.424574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.424607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.424634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.436080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.436112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.436129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.448797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.448829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.448846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.459961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.459992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.460010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.471680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.471712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.471729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.483780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.483812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.483830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.495177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.495210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.495227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.506628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.506659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.506677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.518236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.518267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.518284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.530817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.530849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.530867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.542461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.542492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.542510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.554175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.554219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.554236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.566011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.566042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.480 [2024-07-23 01:09:02.566060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.577505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.577537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.577554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.589008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.589040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.480 [2024-07-23 01:09:02.589067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.480 [2024-07-23 01:09:02.601269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.480 [2024-07-23 01:09:02.601300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.601318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.481 [2024-07-23 01:09:02.612968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.481 [2024-07-23 01:09:02.612999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.613016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.481 [2024-07-23 01:09:02.624511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.481 [2024-07-23 01:09:02.624542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.624559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.481 [2024-07-23 01:09:02.635942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.481 [2024-07-23 01:09:02.635973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.635990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.481 [2024-07-23 01:09:02.647349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.481 [2024-07-23 01:09:02.647381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12222 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.647398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.481 [2024-07-23 01:09:02.659766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.481 [2024-07-23 01:09:02.659797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.659815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.481 [2024-07-23 01:09:02.671481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.481 [2024-07-23 01:09:02.671511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.481 [2024-07-23 01:09:02.671544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.741 [2024-07-23 01:09:02.682658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.741 [2024-07-23 01:09:02.682690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.741 [2024-07-23 01:09:02.682708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.741 [2024-07-23 01:09:02.695291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.695356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.706726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.706759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.706776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.718707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.718740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.718757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.730174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.730205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.730222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.741564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.741609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.741634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.753984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.754031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.754048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.766022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.766052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.766070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.777226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.777271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.777288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.789467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.789507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.789527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.802270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.802305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.802325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.815035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 
01:09:02.815070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.815089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.827275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.827312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.827332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.839307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.839342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.839362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.852607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.852649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.852687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.864477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.864519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.864538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.876641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.876689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.876706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.889822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.889857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.889877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.902289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.902325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.902353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.914318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.914355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.914374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.926579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.926624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.926646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.742 [2024-07-23 01:09:02.939472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:18.742 [2024-07-23 01:09:02.939507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.742 [2024-07-23 01:09:02.939527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.003 [2024-07-23 01:09:02.952026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.003 [2024-07-23 01:09:02.952063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.003 [2024-07-23 01:09:02.952083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.003 [2024-07-23 01:09:02.963943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.003 [2024-07-23 01:09:02.963980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.003 [2024-07-23 01:09:02.963999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.003 [2024-07-23 01:09:02.976893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.003 [2024-07-23 01:09:02.976946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.003 [2024-07-23 01:09:02.976967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.003 [2024-07-23 01:09:02.989202] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.003 [2024-07-23 01:09:02.989236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:02.989256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.001513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.001547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.001566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.013584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.013628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.013650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.026795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.026825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.026842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.038814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.038846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.038863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.050902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.050933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.050950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.063998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.064037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.064058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.076611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.076655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.076689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.088494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.088529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.088548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.101474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.101509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.101529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.113833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.113864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.113897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.125979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.126015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.126034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.138375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.138410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.138430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.150524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.150563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.150584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.164098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.164135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.164154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.176345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.176379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.176399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.188556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.188592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.188611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.004 [2024-07-23 01:09:03.201703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.004 [2024-07-23 01:09:03.201740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.004 [2024-07-23 01:09:03.201757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.269 [2024-07-23 01:09:03.214134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.269 [2024-07-23 01:09:03.214171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.269 [2024-07-23 01:09:03.214191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.225740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.225782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.225800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.238145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.238180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.238199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.251278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.251313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.251333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.263391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.263430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.263450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.275704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.275736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.275753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.288173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.288208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.288228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.301435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.301470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.301489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.313576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.313611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.313641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.325338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.325368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:19.270 [2024-07-23 01:09:03.325386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.338621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.338669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.338687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.350699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.350734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.350753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.363166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.363202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.363221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.375251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.375287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.375307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.388469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.388504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.388523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.400681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.400715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.400732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.413201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.413236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:16967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.413255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.425295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.270 [2024-07-23 01:09:03.425330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.270 [2024-07-23 01:09:03.425349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.270 [2024-07-23 01:09:03.437701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.271 [2024-07-23 01:09:03.437734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.271 [2024-07-23 01:09:03.437762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.271 [2024-07-23 01:09:03.450121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.271 [2024-07-23 01:09:03.450156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.271 [2024-07-23 01:09:03.450175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.271 [2024-07-23 01:09:03.463715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.271 [2024-07-23 01:09:03.463747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.271 [2024-07-23 01:09:03.463764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.530 [2024-07-23 01:09:03.476458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.530 [2024-07-23 01:09:03.476494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.530 [2024-07-23 01:09:03.476513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.530 [2024-07-23 01:09:03.488534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.530 [2024-07-23 01:09:03.488570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.530 [2024-07-23 01:09:03.488590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.530 [2024-07-23 01:09:03.500957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.530 [2024-07-23 01:09:03.500994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.530 [2024-07-23 01:09:03.501013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.530 [2024-07-23 01:09:03.512880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b9e30) 00:29:19.530 [2024-07-23 01:09:03.512912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.530 [2024-07-23 01:09:03.512945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.530 00:29:19.530 Latency(us) 00:29:19.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.530 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:19.530 nvme0n1 : 2.00 20403.84 79.70 0.00 0.00 6265.22 2900.57 19320.98 00:29:19.530 =================================================================================================================== 00:29:19.530 Total : 20403.84 79.70 0.00 0.00 6265.22 2900.57 19320.98 00:29:19.530 0 00:29:19.530 01:09:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:19.530 01:09:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:19.530 01:09:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:19.530 01:09:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:19.530 | .driver_specific 00:29:19.530 | .nvme_error 00:29:19.530 | .status_code 00:29:19.530 | .command_transient_transport_error' 00:29:19.788 01:09:03 -- host/digest.sh@71 -- # (( 160 > 0 )) 00:29:19.788 01:09:03 -- host/digest.sh@73 -- # killprocess 3520822 00:29:19.788 01:09:03 -- common/autotest_common.sh@926 -- # '[' -z 3520822 ']' 00:29:19.788 01:09:03 -- common/autotest_common.sh@930 -- # kill -0 3520822 00:29:19.788 01:09:03 -- common/autotest_common.sh@931 -- # uname 00:29:19.788 01:09:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.788 01:09:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3520822 00:29:19.788 01:09:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:19.788 01:09:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:19.788 01:09:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3520822' 00:29:19.788 killing process with pid 3520822 00:29:19.788 01:09:03 -- common/autotest_common.sh@945 -- # kill 3520822 00:29:19.788 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.788 00:29:19.788 Latency(us) 00:29:19.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.788 =================================================================================================================== 00:29:19.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.789 01:09:03 -- common/autotest_common.sh@950 -- # wait 3520822 00:29:20.046 01:09:04 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:29:20.046 01:09:04 -- host/digest.sh@54 -- # local rw bs qd 00:29:20.046 01:09:04 -- host/digest.sh@56 -- # rw=randread 00:29:20.046 01:09:04 -- host/digest.sh@56 -- # bs=131072 00:29:20.046 01:09:04 -- host/digest.sh@56 -- # qd=16 
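The trace above is how host/digest.sh decides the first run passed: once bdevperf reports its latency summary, get_transient_errcount reads the per-bdev NVMe error counters for nvme0n1 over the bperf RPC socket and checks that the count of TRANSIENT TRANSPORT ERROR (00/22) completions (160 here) is greater than zero, i.e. that the corrupted data digests were actually detected, before killing the bdevperf process. A minimal stand-alone sketch of that check, assuming an SPDK bdevperf instance is already serving RPCs on /var/tmp/bperf.sock with a bdev named nvme0n1 and that error statistics were enabled earlier via bdev_nvme_set_options --nvme-error-stat (both taken from this trace, not from a separate reference):

  # query per-bdev NVMe error statistics over the bperf RPC socket
  count=$(spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the digest test only treats the run as successful when at least one
  # transient transport error was recorded for the injected CRC32C corruption
  (( count > 0 )) && echo "digest errors detected: $count"

The jq path and the (( count > 0 )) guard mirror the get_transient_errcount lines in the trace; the rpc.py location is abbreviated here relative to the workspace path used above.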
00:29:20.046 01:09:04 -- host/digest.sh@58 -- # bperfpid=3521490 00:29:20.046 01:09:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:20.046 01:09:04 -- host/digest.sh@60 -- # waitforlisten 3521490 /var/tmp/bperf.sock 00:29:20.046 01:09:04 -- common/autotest_common.sh@819 -- # '[' -z 3521490 ']' 00:29:20.047 01:09:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.047 01:09:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:20.047 01:09:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.047 01:09:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:20.047 01:09:04 -- common/autotest_common.sh@10 -- # set +x 00:29:20.047 [2024-07-23 01:09:04.060561] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:20.047 [2024-07-23 01:09:04.060690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3521490 ] 00:29:20.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.047 Zero copy mechanism will not be used. 00:29:20.047 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.047 [2024-07-23 01:09:04.118862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.047 [2024-07-23 01:09:04.202448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.980 01:09:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:20.981 01:09:05 -- common/autotest_common.sh@852 -- # return 0 00:29:20.981 01:09:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.981 01:09:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.238 01:09:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:21.238 01:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.238 01:09:05 -- common/autotest_common.sh@10 -- # set +x 00:29:21.238 01:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.238 01:09:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.238 01:09:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.496 nvme0n1 00:29:21.496 01:09:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:21.496 01:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.496 01:09:05 -- common/autotest_common.sh@10 -- # set +x 00:29:21.496 01:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.496 01:09:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:21.496 01:09:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:29:21.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:21.754 Zero copy mechanism will not be used. 00:29:21.754 Running I/O for 2 seconds... 00:29:21.754 [2024-07-23 01:09:05.816219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.816287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.816309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.828179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.828217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.828240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.839281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.839315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.839341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.850261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.850296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.850323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.861439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.861474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.861497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.872815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.872844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.872868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.884063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.884097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.884130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.895153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.895187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.895208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.906310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.906343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.906361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.917428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.917462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.917481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.928584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.928623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.928666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.939690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.939719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.939737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.754 [2024-07-23 01:09:05.950690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:21.754 [2024-07-23 01:09:05.950718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.754 [2024-07-23 01:09:05.950735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:05.961803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:05.961831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:05.961854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:05.972919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:05.972953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:05.972982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:05.984123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:05.984163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:05.984184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:05.996041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:05.996075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:05.996094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.007093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.007134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.007153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.018518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.018551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.018571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.030319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.030354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.030373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.041457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 
00:29:22.013 [2024-07-23 01:09:06.041491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.041510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.052560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.052594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.052620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.063755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.063784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.063801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.074859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.074887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.074908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.086567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.086602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.086634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.097713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.097742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.097758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.108976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.109010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.109029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.120147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.120180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.120200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.132210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.013 [2024-07-23 01:09:06.132245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.013 [2024-07-23 01:09:06.132267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.013 [2024-07-23 01:09:06.143434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.143468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.143487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.014 [2024-07-23 01:09:06.154693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.154724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.154741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.014 [2024-07-23 01:09:06.165995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.166040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.166059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.014 [2024-07-23 01:09:06.176715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.176744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.176765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.014 [2024-07-23 01:09:06.187814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.187843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.187860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.014 [2024-07-23 01:09:06.199056] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.199098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.199117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.014 [2024-07-23 01:09:06.210231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.014 [2024-07-23 01:09:06.210264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.014 [2024-07-23 01:09:06.210283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.221355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.221389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.221411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.232900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.232935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.232979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.244065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.244098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.244122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.255552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.255584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.255603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.267043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.267076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.267097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:29:22.272 [2024-07-23 01:09:06.278368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.278401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.278423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.289595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.289658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.289675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.301353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.301387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.301412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.312860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.312890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.312906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.323971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.324004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.324034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.335140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.335172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.335191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.346440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.346472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.346491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.358305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.358339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.358359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.369354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.369387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.369415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.380394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.380427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.380446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.391449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.391482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.391502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.402539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.402572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.272 [2024-07-23 01:09:06.402594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.272 [2024-07-23 01:09:06.413792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.272 [2024-07-23 01:09:06.413821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.273 [2024-07-23 01:09:06.413840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.273 [2024-07-23 01:09:06.424985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.273 [2024-07-23 01:09:06.425017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.273 [2024-07-23 01:09:06.425036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.273 [2024-07-23 01:09:06.436511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.273 [2024-07-23 01:09:06.436544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.273 [2024-07-23 01:09:06.436563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.273 [2024-07-23 01:09:06.447575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.273 [2024-07-23 01:09:06.447608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.273 [2024-07-23 01:09:06.447660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.273 [2024-07-23 01:09:06.458745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.273 [2024-07-23 01:09:06.458772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.273 [2024-07-23 01:09:06.458796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.273 [2024-07-23 01:09:06.469746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.273 [2024-07-23 01:09:06.469793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.273 [2024-07-23 01:09:06.469812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.481087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.481120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.481141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.492891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.492938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.492954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.504117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.504151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.532 [2024-07-23 01:09:06.504179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.515081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.515114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.515134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.526853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.526882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.526900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.538137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.538171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.538190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.549271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.549304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.549324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.560515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.560549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.560568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.571914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.571954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.571971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.582954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.583001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.583021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.593988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.594037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.594057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.604999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.605033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.605053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.616737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.616766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.616783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.627744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.627774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.627790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.638681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.638710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.638726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.649779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.649807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.649823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.660793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.660821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.660843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.671848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.671877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.671893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.683243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.683277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.683297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.694235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.532 [2024-07-23 01:09:06.694268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.532 [2024-07-23 01:09:06.694287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.532 [2024-07-23 01:09:06.706047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.533 [2024-07-23 01:09:06.706081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.533 [2024-07-23 01:09:06.706101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.533 [2024-07-23 01:09:06.717690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.533 [2024-07-23 01:09:06.717735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.533 [2024-07-23 01:09:06.717751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.533 [2024-07-23 01:09:06.728807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.533 [2024-07-23 01:09:06.728835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.533 [2024-07-23 01:09:06.728852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.739873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 
00:29:22.791 [2024-07-23 01:09:06.739903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.739921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.750975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.751008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.751027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.762719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.762767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.762784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.773772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.773800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.773817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.784751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.784780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.784796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.795744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.795773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.795789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.806699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.806743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.806761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.817816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.817843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.817860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.830092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.830127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.830146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.791 [2024-07-23 01:09:06.841719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.791 [2024-07-23 01:09:06.841748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.791 [2024-07-23 01:09:06.841765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.852682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.852711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.852728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.863638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.863684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.863701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.874788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.874818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.874834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.885779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.885808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.885825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.896758] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.896787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.896804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.907780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.907809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.907825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.918762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.918791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.918808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.929951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.929998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.930018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.941890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.941920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.941936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.954172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.954213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.954234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.965207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.965241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.965259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:22.792 [2024-07-23 01:09:06.976217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.976251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.976270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.792 [2024-07-23 01:09:06.987451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:22.792 [2024-07-23 01:09:06.987485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.792 [2024-07-23 01:09:06.987504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:06.998399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:06.998432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:06.998451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.009519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.009554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.009572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.020530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.020563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.020581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.031555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.031587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.031606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.042739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.042767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.042783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.053928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.053970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.053986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.065054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.065087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.065106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.076111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.076144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.076163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.087227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.087260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.087279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.098400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.098434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.098453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.109525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.109559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.109578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.120498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.120530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.120550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.131507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.131540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.131558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.142602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.142642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.051 [2024-07-23 01:09:07.142680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.051 [2024-07-23 01:09:07.153657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.051 [2024-07-23 01:09:07.153685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.153702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.164817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.164845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.164861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.177395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.177430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.177450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.188513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.188548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.188567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.199559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.199593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.052 [2024-07-23 01:09:07.199620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.210582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.210624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.210660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.221581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.221625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.221646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.233251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.233286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.233305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.052 [2024-07-23 01:09:07.244265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.052 [2024-07-23 01:09:07.244305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.052 [2024-07-23 01:09:07.244324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.255423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.255456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.255474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.266506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.266541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.266560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.278405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.278441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.278472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.289566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.289600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.289627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.300722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.300768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.300787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.312066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.312101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.312120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.323927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.323975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.323994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.334828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.334857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.334874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.346045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.346088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.346107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.357143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.357176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.357194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.368367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.368399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.368418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.379637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.379683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.379700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.390650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.390687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.390704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.401790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.401821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.312 [2024-07-23 01:09:07.401839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.312 [2024-07-23 01:09:07.412974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.312 [2024-07-23 01:09:07.413007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.413026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.424086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.424119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.424138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.435152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 
00:29:23.313 [2024-07-23 01:09:07.435184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.435209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.446178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.446212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.446230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.456886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.456926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.456942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.468017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.468050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.468069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.479173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.479205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.479224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.490485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.490518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.490537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.501687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.501716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.501733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.313 [2024-07-23 01:09:07.512979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.313 [2024-07-23 01:09:07.513012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.313 [2024-07-23 01:09:07.513031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.524830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.524860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.524877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.537038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.537074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.537094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.548055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.548089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.548108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.559232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.559267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.559286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.571092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.571127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.571147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.582733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.582763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.582779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.593798] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.593827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.593843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.604749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.604777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.604793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.615781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.615809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.615825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.626804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.626833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.626854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.637838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.637869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.637888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.649119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.649153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.649178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.660327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.660361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.660383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:29:23.572 [2024-07-23 01:09:07.671551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.671584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.671603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.682845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.682875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.682908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.572 [2024-07-23 01:09:07.694081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.572 [2024-07-23 01:09:07.694115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.572 [2024-07-23 01:09:07.694144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.705268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.705301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.705326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.716448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.716481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.716512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.727611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.727673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.727701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.738725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.738768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.738787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.750087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.750120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.750139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.761173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.761205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.761223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.573 [2024-07-23 01:09:07.772230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.573 [2024-07-23 01:09:07.772264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.573 [2024-07-23 01:09:07.772282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.831 [2024-07-23 01:09:07.783507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.831 [2024-07-23 01:09:07.783540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.831 [2024-07-23 01:09:07.783564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.831 [2024-07-23 01:09:07.794313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.831 [2024-07-23 01:09:07.794346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.831 [2024-07-23 01:09:07.794372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.831 [2024-07-23 01:09:07.805116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa91ce0) 00:29:23.831 [2024-07-23 01:09:07.805149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.831 [2024-07-23 01:09:07.805174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.831 00:29:23.832 Latency(us) 00:29:23.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.832 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:23.832 nvme0n1 : 2.00 2753.25 344.16 0.00 0.00 5805.00 5048.70 13689.74 00:29:23.832 
=================================================================================================================== 00:29:23.832 Total : 2753.25 344.16 0.00 0.00 5805.00 5048.70 13689.74 00:29:23.832 0 00:29:23.832 01:09:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:23.832 01:09:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:23.832 01:09:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:23.832 | .driver_specific 00:29:23.832 | .nvme_error 00:29:23.832 | .status_code 00:29:23.832 | .command_transient_transport_error' 00:29:23.832 01:09:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:24.089 01:09:08 -- host/digest.sh@71 -- # (( 178 > 0 )) 00:29:24.090 01:09:08 -- host/digest.sh@73 -- # killprocess 3521490 00:29:24.090 01:09:08 -- common/autotest_common.sh@926 -- # '[' -z 3521490 ']' 00:29:24.090 01:09:08 -- common/autotest_common.sh@930 -- # kill -0 3521490 00:29:24.090 01:09:08 -- common/autotest_common.sh@931 -- # uname 00:29:24.090 01:09:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.090 01:09:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3521490 00:29:24.090 01:09:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:24.090 01:09:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:24.090 01:09:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3521490' 00:29:24.090 killing process with pid 3521490 00:29:24.090 01:09:08 -- common/autotest_common.sh@945 -- # kill 3521490 00:29:24.090 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.090 00:29:24.090 Latency(us) 00:29:24.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.090 =================================================================================================================== 00:29:24.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.090 01:09:08 -- common/autotest_common.sh@950 -- # wait 3521490 00:29:24.348 01:09:08 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:29:24.348 01:09:08 -- host/digest.sh@54 -- # local rw bs qd 00:29:24.348 01:09:08 -- host/digest.sh@56 -- # rw=randwrite 00:29:24.348 01:09:08 -- host/digest.sh@56 -- # bs=4096 00:29:24.348 01:09:08 -- host/digest.sh@56 -- # qd=128 00:29:24.348 01:09:08 -- host/digest.sh@58 -- # bperfpid=3522303 00:29:24.348 01:09:08 -- host/digest.sh@60 -- # waitforlisten 3522303 /var/tmp/bperf.sock 00:29:24.348 01:09:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:24.348 01:09:08 -- common/autotest_common.sh@819 -- # '[' -z 3522303 ']' 00:29:24.348 01:09:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.348 01:09:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:24.348 01:09:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.348 01:09:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:24.348 01:09:08 -- common/autotest_common.sh@10 -- # set +x 00:29:24.348 [2024-07-23 01:09:08.344968] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
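The shell trace above is the pass/fail check for the randread ddgst case that just finished: host/digest.sh reads bdevperf's per-bdev NVMe error statistics over the bperf RPC socket and requires the transient-transport-error counter to be non-zero (178 here) before it kills the bdevperf process. A minimal sketch of that readout, with the rpc.py path abbreviated to the SPDK repo root and assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev as this run (and that the controller for it was attached with --nvme-error-stat, as the next run's setup shows):

  # Query I/O statistics for the bdev under test from the running bdevperf app.
  # With --nvme-error-stat enabled, driver_specific.nvme_error carries per-status-code counters.
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes only if the injected digest corruption produced at least one such error.
  (( errs > 0 ))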
00:29:24.348 [2024-07-23 01:09:08.345069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522303 ] 00:29:24.348 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.348 [2024-07-23 01:09:08.409660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.348 [2024-07-23 01:09:08.507440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.282 01:09:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:25.282 01:09:09 -- common/autotest_common.sh@852 -- # return 0 00:29:25.282 01:09:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.282 01:09:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.540 01:09:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:25.540 01:09:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.540 01:09:09 -- common/autotest_common.sh@10 -- # set +x 00:29:25.540 01:09:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.540 01:09:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.540 01:09:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.797 nvme0n1 00:29:26.055 01:09:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:26.055 01:09:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.055 01:09:09 -- common/autotest_common.sh@10 -- # set +x 00:29:26.055 01:09:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.055 01:09:10 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:26.055 01:09:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.055 Running I/O for 2 seconds... 
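The setup sequence traced above is what makes the randwrite pass interesting: bdevperf is started idle with -z, NVMe error statistics and bdev-level retries are enabled, any stale CRC32C error injection in the accel framework is cleared, the NVMe/TCP controller is attached with data digest (--ddgst) enabled, and only then is the injection re-armed before perform_tests starts the queued workload. A condensed sketch of that sequence, with rpc.py and bdevperf paths abbreviated to the SPDK repo root; note the accel_error_inject_error calls go through rpc_cmd, i.e. the default RPC socket of the application set up earlier in this log (outside this excerpt), not the bperf socket:

  # Start bdevperf paused (-z waits for the perform_tests RPC) with a 128-deep 4 KiB randwrite job.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-status-code NVMe error counters and retry failed I/O at the bdev_nvme level.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any active crc32c error injection while the controller connects.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe/TCP controller with data digest enabled so payload CRCs are generated and checked.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm crc32c corruption (the -o/-t/-i arguments are copied verbatim from the trace above).
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the workload; the digest errors that follow are the expected result of the injection.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests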
00:29:26.055 [2024-07-23 01:09:10.126404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.126823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.126860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.140517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.140903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.140960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.154565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.154954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.155005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.168508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.168890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.168921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.182514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.182904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.182958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.196492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.196868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.196899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.210416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.210751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.210780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.224233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.224597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.224665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.238061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.238395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.238429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.055 [2024-07-23 01:09:10.251875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.055 [2024-07-23 01:09:10.252253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.055 [2024-07-23 01:09:10.252303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.265815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.266152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.266186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.279650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.280039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.280083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.293492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.293867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.293898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.307364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.307749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.307780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.321179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.321552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.321603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.335019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.335388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.335444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.348849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.349231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.349268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.362674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.363025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.363060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.376450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.376811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.376861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.390316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.390707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.390752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.404121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.404484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.404524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.418012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.418377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.418417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.431760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.432119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.432155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.445559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.445934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.445970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.459372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.459757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.459788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.473184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.473558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.473594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.487014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.487375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.487415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.312 [2024-07-23 01:09:10.500794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.312 [2024-07-23 01:09:10.501166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.312 [2024-07-23 01:09:10.501201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.514830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.515245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.528747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.529104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.529139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.542792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.543161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.543198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.556626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.557015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.557050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.570471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.570865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.570920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.584278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.584677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.584708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.598093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.598467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.598502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.612026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.612400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.612435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.626159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.626534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.626569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.640112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.640496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.640530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.654020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.654393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.654428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.667906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.668295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.668337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.681775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.682149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.682184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.695745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.696120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.696156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.709811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.710202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.710242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.723709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.724072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.724107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.737575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.737907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.737955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.751470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.751805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.751835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.569 [2024-07-23 01:09:10.765278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.569 [2024-07-23 01:09:10.765652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.569 [2024-07-23 01:09:10.765702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.779252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.779620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.779676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.793075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 
01:09:10.793477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.806891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.807277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.807318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.820722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.821082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.821117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.834556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.834909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.834956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.848355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.848746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.848791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.862149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.862482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.862518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.875995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.876367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.876402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.889759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.890099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:26.826 [2024-07-23 01:09:10.890135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.903559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.903956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.903992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.917334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.917729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.917761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.931350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.931745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.931792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.945319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.945705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.945752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.959329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.959725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.959758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.973249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.973625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.973675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:10.987119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:10.987480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6444 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:10.987520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:11.000880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:11.001257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.826 [2024-07-23 01:09:11.001298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.826 [2024-07-23 01:09:11.014684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:26.826 [2024-07-23 01:09:11.015039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.827 [2024-07-23 01:09:11.015075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:26.827 [2024-07-23 01:09:11.028510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.028906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.028955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.042417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.042792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.042824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.056262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.056635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.056682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.069978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.070344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.070380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.083810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.084168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11266 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.084203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.097558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.097943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.097979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.111422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.111805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.111851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.125206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.125574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.125610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.138658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.139004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.139050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.151683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.152030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.152075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.164751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.165094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.165148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.177717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.178077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 
nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.178108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.190726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.191088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.191118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.203672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.204012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.204063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.216619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.216954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.216989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.229501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.229843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.229893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.242376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.242705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.242739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.255346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.255694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.255744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.268333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.268679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.268714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.084 [2024-07-23 01:09:11.281282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.084 [2024-07-23 01:09:11.281652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.084 [2024-07-23 01:09:11.281684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.294493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.294856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.294888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.307465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.307825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.307856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.320417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.320775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.320821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.333406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.333753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.333785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.346362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.346765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.359453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.359766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.359796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.372300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.372657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.372699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.385341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.341 [2024-07-23 01:09:11.385728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.341 [2024-07-23 01:09:11.385772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.341 [2024-07-23 01:09:11.399103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.399472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.399507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.412914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.413268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.413302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.426742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.427095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.427130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.440515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.440866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.440894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.454337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.454733] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.454763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.468154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.468484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.468519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.482008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.482377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.482411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.495863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.496244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.496278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.509776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.510151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.510191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.523685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.524051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.524085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.342 [2024-07-23 01:09:11.537415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.342 [2024-07-23 01:09:11.537792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.342 [2024-07-23 01:09:11.537822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.551210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 
[2024-07-23 01:09:11.551581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.551623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.565164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.565526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.565560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.578998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.579372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.579406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.592827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.593194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.593229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.606633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.607048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.607086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.620528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.620896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.620949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.634340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.634737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.634768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.648005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 
00:29:27.600 [2024-07-23 01:09:11.648369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.648404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.661731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.662084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.662118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.675527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.675905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.675951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.689409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.689788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.689820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.703157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.703521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.703555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.716994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.717366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.717407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.730760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.731137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.731172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.744709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with 
pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.745068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.745102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.758573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.758965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.759011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.772433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.772795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.772827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.786443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.786835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.786867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.600 [2024-07-23 01:09:11.800438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.600 [2024-07-23 01:09:11.800814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.600 [2024-07-23 01:09:11.800846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.814269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.814666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.814698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.828197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.828573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.828631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.842159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.842522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.842562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.856030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.856398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.856432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.870016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.870402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.870437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.884075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.884435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.884474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.898123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.898488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.898522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.911944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.912305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.912344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.925803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.926162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.926197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.939527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.939911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.939955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.953375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.953774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.953805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.967216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.967580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.967628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.981135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.981507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.981541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:11.994874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:11.995257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:11.995302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:12.008667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:12.009011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:12.009046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:12.022559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:12.022953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:12.022984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:12.036449] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:12.036791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:12.036820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:27.859 [2024-07-23 01:09:12.050148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:27.859 [2024-07-23 01:09:12.050513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.859 [2024-07-23 01:09:12.050548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:28.129 [2024-07-23 01:09:12.064018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:28.129 [2024-07-23 01:09:12.064349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.129 [2024-07-23 01:09:12.064384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:28.129 [2024-07-23 01:09:12.077950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:28.129 [2024-07-23 01:09:12.078320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.129 [2024-07-23 01:09:12.078354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:28.129 [2024-07-23 01:09:12.091836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:28.129 [2024-07-23 01:09:12.092210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.129 [2024-07-23 01:09:12.092245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:28.129 [2024-07-23 01:09:12.105679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4f780) with pdu=0x2000190fda78 00:29:28.129 [2024-07-23 01:09:12.106046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.129 [2024-07-23 01:09:12.106080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:28.129 00:29:28.129 Latency(us) 00:29:28.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.129 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.129 nvme0n1 : 2.01 18547.95 72.45 0.00 0.00 6884.45 5485.61 14175.19 00:29:28.129 =================================================================================================================== 00:29:28.129 Total : 18547.95 72.45 0.00 0.00 6884.45 5485.61 14175.19 00:29:28.129 0 00:29:28.129 01:09:12 -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:28.129 01:09:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:28.129 01:09:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:28.129 | .driver_specific 00:29:28.129 | .nvme_error 00:29:28.129 | .status_code 00:29:28.129 | .command_transient_transport_error' 00:29:28.129 01:09:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:28.387 01:09:12 -- host/digest.sh@71 -- # (( 145 > 0 )) 00:29:28.387 01:09:12 -- host/digest.sh@73 -- # killprocess 3522303 00:29:28.387 01:09:12 -- common/autotest_common.sh@926 -- # '[' -z 3522303 ']' 00:29:28.387 01:09:12 -- common/autotest_common.sh@930 -- # kill -0 3522303 00:29:28.387 01:09:12 -- common/autotest_common.sh@931 -- # uname 00:29:28.387 01:09:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:28.387 01:09:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3522303 00:29:28.387 01:09:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:28.387 01:09:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:28.387 01:09:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3522303' 00:29:28.387 killing process with pid 3522303 00:29:28.387 01:09:12 -- common/autotest_common.sh@945 -- # kill 3522303 00:29:28.387 Received shutdown signal, test time was about 2.000000 seconds 00:29:28.387 00:29:28.387 Latency(us) 00:29:28.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.387 =================================================================================================================== 00:29:28.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.387 01:09:12 -- common/autotest_common.sh@950 -- # wait 3522303 00:29:28.645 01:09:12 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:28.645 01:09:12 -- host/digest.sh@54 -- # local rw bs qd 00:29:28.645 01:09:12 -- host/digest.sh@56 -- # rw=randwrite 00:29:28.645 01:09:12 -- host/digest.sh@56 -- # bs=131072 00:29:28.645 01:09:12 -- host/digest.sh@56 -- # qd=16 00:29:28.645 01:09:12 -- host/digest.sh@58 -- # bperfpid=3522971 00:29:28.645 01:09:12 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:28.645 01:09:12 -- host/digest.sh@60 -- # waitforlisten 3522971 /var/tmp/bperf.sock 00:29:28.645 01:09:12 -- common/autotest_common.sh@819 -- # '[' -z 3522971 ']' 00:29:28.645 01:09:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.645 01:09:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:28.645 01:09:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.645 01:09:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:28.645 01:09:12 -- common/autotest_common.sh@10 -- # set +x 00:29:28.645 [2024-07-23 01:09:12.667119] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
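The xtrace above is the validation step for the run that just completed: host/digest.sh pulls the bdev I/O statistics from bdevperf over the bperf RPC socket, extracts the transient transport error counter from the JSON with jq, requires it to be non-zero (145 here), and then kills the bdevperf process. A condensed sketch of that readback, assuming the same SPDK checkout path, socket, and bdev name as in the trace (the helper variables are only for readability):

    # Ask bdevperf, via its RPC socket, for iostat and extract the count of
    # commands that completed with TRANSIENT TRANSPORT ERROR, i.e. the
    # data-digest failures surfaced during the run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # this run reported 145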
00:29:28.645 [2024-07-23 01:09:12.667194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522971 ] 00:29:28.645 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.645 Zero copy mechanism will not be used. 00:29:28.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.645 [2024-07-23 01:09:12.731857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.645 [2024-07-23 01:09:12.819238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.581 01:09:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:29.581 01:09:13 -- common/autotest_common.sh@852 -- # return 0 00:29:29.581 01:09:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:29.581 01:09:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:29.838 01:09:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:29.838 01:09:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.838 01:09:13 -- common/autotest_common.sh@10 -- # set +x 00:29:29.838 01:09:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.838 01:09:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.838 01:09:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.133 nvme0n1 00:29:30.133 01:09:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:30.133 01:09:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:30.133 01:09:14 -- common/autotest_common.sh@10 -- # set +x 00:29:30.133 01:09:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:30.133 01:09:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:30.133 01:09:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:30.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:30.133 Zero copy mechanism will not be used. 00:29:30.133 Running I/O for 2 seconds... 
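Interleaved with the bdevperf startup output above is the setup for the next run_bperf_err pass (randwrite, 131072-byte I/O, queue depth 16): bdevperf is started idle against the bperf RPC socket, NVMe error statistics and unlimited bdev retries are enabled, CRC-32C error injection is cleared and then re-armed for 32 operations, the controller is attached over TCP with data digest enabled (--ddgst), and the timed run is kicked off through bdevperf.py. A flattened sketch of that sequence, assuming the paths from the trace and that the accel_error_inject_error calls go to the target application's default RPC socket (which is what the harness's rpc_cmd wrapper resolves to here):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock

    # Start bdevperf idle; -z makes it wait for a perform_tests RPC instead of
    # running immediately, so it can be configured over $sock first.
    "$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Count NVMe errors and retry indefinitely, so injected digest errors show
    # up in iostat as transient transport errors instead of failing the job.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale crc32c injection on the target side.
    "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Attach the controller over TCP with data digest enabled.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations so digest checks fail, then run the workload.
    "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests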
00:29:30.133 [2024-07-23 01:09:14.258514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.133 [2024-07-23 01:09:14.259091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.133 [2024-07-23 01:09:14.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.133 [2024-07-23 01:09:14.274023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.133 [2024-07-23 01:09:14.274553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.133 [2024-07-23 01:09:14.274589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.133 [2024-07-23 01:09:14.290102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.133 [2024-07-23 01:09:14.290371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.133 [2024-07-23 01:09:14.290402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.133 [2024-07-23 01:09:14.305903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.133 [2024-07-23 01:09:14.306358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.133 [2024-07-23 01:09:14.306388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.133 [2024-07-23 01:09:14.322000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.133 [2024-07-23 01:09:14.322558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.133 [2024-07-23 01:09:14.322592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.390 [2024-07-23 01:09:14.337469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.390 [2024-07-23 01:09:14.337879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.390 [2024-07-23 01:09:14.337910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.390 [2024-07-23 01:09:14.352959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.390 [2024-07-23 01:09:14.353344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.390 [2024-07-23 01:09:14.353374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.390 [2024-07-23 01:09:14.368545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.390 [2024-07-23 01:09:14.369024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.390 [2024-07-23 01:09:14.369054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.390 [2024-07-23 01:09:14.384602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.390 [2024-07-23 01:09:14.385041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.390 [2024-07-23 01:09:14.385071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.390 [2024-07-23 01:09:14.399553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.390 [2024-07-23 01:09:14.399987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.390 [2024-07-23 01:09:14.400016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.414828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.415150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.415179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.429223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.429668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.429698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.444360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.444719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.444753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.459141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.459577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.459634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.474714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.475178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.475207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.489908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.490277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.490306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.504392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.504826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.504857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.519895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.520255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.520285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.535145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.535461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.535490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.550093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.550531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.550560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.565800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.566185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.566214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.391 [2024-07-23 01:09:14.581026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.391 [2024-07-23 01:09:14.581400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.391 [2024-07-23 01:09:14.581430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.596041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.596349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.596379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.611310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.611780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.611812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.625991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.626370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.626402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.641297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.641765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.641796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.655998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.656279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.656308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.671815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.672164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 
[2024-07-23 01:09:14.672193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.686710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.687150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.687182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.702153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.702483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.702512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.717772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.718330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.718366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.733153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.733513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.649 [2024-07-23 01:09:14.733544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.649 [2024-07-23 01:09:14.748154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.649 [2024-07-23 01:09:14.748501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.748530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.650 [2024-07-23 01:09:14.763268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.650 [2024-07-23 01:09:14.763592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.763645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.650 [2024-07-23 01:09:14.778095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.650 [2024-07-23 01:09:14.778388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.778417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.650 [2024-07-23 01:09:14.793105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.650 [2024-07-23 01:09:14.793549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.793578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.650 [2024-07-23 01:09:14.808788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.650 [2024-07-23 01:09:14.809115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.809145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.650 [2024-07-23 01:09:14.823316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.650 [2024-07-23 01:09:14.823695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.823724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.650 [2024-07-23 01:09:14.838280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.650 [2024-07-23 01:09:14.838740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.650 [2024-07-23 01:09:14.838769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.852370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.852856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.852888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.866921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.867369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.867398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.881788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.882243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.882273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.897364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.897778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.897808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.912538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.913023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.913053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.927445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.927831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.927860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.942212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.942605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.942641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.957312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.957825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.957854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.971829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.972202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.972232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:14.986529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:14.986830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:14.986871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.000872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.001254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.001284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.016179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.016514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.016543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.030778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.031179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.031224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.046236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.046653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.046681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.061386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.061826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.061859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.076999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.077413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.077442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.092004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 
[2024-07-23 01:09:15.092386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.092415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.908 [2024-07-23 01:09:15.106800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:30.908 [2024-07-23 01:09:15.107191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.908 [2024-07-23 01:09:15.107225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.120509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.120901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.120929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.135860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.136254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.136283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.150742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.151179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.151207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.166071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.166509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.166538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.181446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.181815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.181845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.198365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) 
with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.198817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.198846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.214042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.214424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.214464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.230088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.230528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.230557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.244935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.245380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.245408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.260782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.261057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.261084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.276622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.277116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.277145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.291624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.291977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.307426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.307884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.307938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.322986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.323323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.323352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.337863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.338244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.338277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.353093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.353374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.167 [2024-07-23 01:09:15.367439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.167 [2024-07-23 01:09:15.367840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.167 [2024-07-23 01:09:15.367881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.382575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.382964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.383004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.397624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.397843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.397871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.411916] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.412325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.412353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.425259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.425681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.425709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.440598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.440971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.440999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.455459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.455846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.455879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.470391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.470703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.470733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.485157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.485488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.485516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.501143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.501475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.501507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:31.426 [2024-07-23 01:09:15.515907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.516360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.516388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.531065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.531507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.531535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.546417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.546855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.546893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.561954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.562347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.562387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.576734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.577037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.577068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.591880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.592324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.592355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.607151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.607542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.607584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.426 [2024-07-23 01:09:15.621809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.426 [2024-07-23 01:09:15.622167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.426 [2024-07-23 01:09:15.622194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.637450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.684 [2024-07-23 01:09:15.637835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.684 [2024-07-23 01:09:15.637872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.652650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.684 [2024-07-23 01:09:15.653123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.684 [2024-07-23 01:09:15.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.668824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.684 [2024-07-23 01:09:15.669237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.684 [2024-07-23 01:09:15.669267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.683729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.684 [2024-07-23 01:09:15.684052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.684 [2024-07-23 01:09:15.684095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.698976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.684 [2024-07-23 01:09:15.699473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.684 [2024-07-23 01:09:15.699501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.713436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.684 [2024-07-23 01:09:15.713922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.684 [2024-07-23 01:09:15.713951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.684 [2024-07-23 01:09:15.729536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.730078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.730112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.745008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.745447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.745475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.760197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.760548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.760579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.775227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.775650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.775689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.789923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.790294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.790322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.803860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.804260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.804291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.819158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.819476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.819504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.833591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.834265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.834293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.848280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.848871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.848901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.863789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.864234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.864262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.685 [2024-07-23 01:09:15.878247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.685 [2024-07-23 01:09:15.878632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.685 [2024-07-23 01:09:15.878676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.893133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.893495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.893528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.908852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.909339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.909367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.924002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.924289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 
[2024-07-23 01:09:15.924317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.938662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.939051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.939080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.953979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.954472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.954500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.968821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.969166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.969194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.984127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.984619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.984647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:15.997888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:15.998176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:15.998205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.012826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.013279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.013307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.028589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.029050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.029080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.044299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.044666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.044700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.058421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.058800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.058835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.073087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.073528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.073557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.087719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.088129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.088158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.101917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.102280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.102309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.116890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.117361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.117390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.943 [2024-07-23 01:09:16.131713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:31.943 [2024-07-23 01:09:16.132050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.943 [2024-07-23 01:09:16.132078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.146270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.146650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.146677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.161510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.161888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.161917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.176012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.176438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.176467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.190362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.190910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.190939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.205287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.205653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.205682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.220363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.220825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.220855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.201 [2024-07-23 01:09:16.236019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4fa50) with pdu=0x2000190fef90 00:29:32.201 [2024-07-23 01:09:16.236422] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.201 [2024-07-23 01:09:16.236451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.201 00:29:32.201 Latency(us) 00:29:32.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.201 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:32.201 nvme0n1 : 2.01 2052.21 256.53 0.00 0.00 7777.69 6019.60 16408.27 00:29:32.201 =================================================================================================================== 00:29:32.201 Total : 2052.21 256.53 0.00 0.00 7777.69 6019.60 16408.27 00:29:32.201 0 00:29:32.201 01:09:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.201 01:09:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.202 01:09:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:32.202 01:09:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.202 | .driver_specific 00:29:32.202 | .nvme_error 00:29:32.202 | .status_code 00:29:32.202 | .command_transient_transport_error' 00:29:32.459 01:09:16 -- host/digest.sh@71 -- # (( 132 > 0 )) 00:29:32.459 01:09:16 -- host/digest.sh@73 -- # killprocess 3522971 00:29:32.459 01:09:16 -- common/autotest_common.sh@926 -- # '[' -z 3522971 ']' 00:29:32.459 01:09:16 -- common/autotest_common.sh@930 -- # kill -0 3522971 00:29:32.459 01:09:16 -- common/autotest_common.sh@931 -- # uname 00:29:32.459 01:09:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:32.459 01:09:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3522971 00:29:32.459 01:09:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:32.459 01:09:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:32.459 01:09:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3522971' 00:29:32.459 killing process with pid 3522971 00:29:32.459 01:09:16 -- common/autotest_common.sh@945 -- # kill 3522971 00:29:32.459 Received shutdown signal, test time was about 2.000000 seconds 00:29:32.459 00:29:32.459 Latency(us) 00:29:32.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.459 =================================================================================================================== 00:29:32.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.459 01:09:16 -- common/autotest_common.sh@950 -- # wait 3522971 00:29:32.717 01:09:16 -- host/digest.sh@115 -- # killprocess 3520795 00:29:32.717 01:09:16 -- common/autotest_common.sh@926 -- # '[' -z 3520795 ']' 00:29:32.717 01:09:16 -- common/autotest_common.sh@930 -- # kill -0 3520795 00:29:32.717 01:09:16 -- common/autotest_common.sh@931 -- # uname 00:29:32.717 01:09:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:32.717 01:09:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3520795 00:29:32.717 01:09:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:32.717 01:09:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:32.717 01:09:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3520795' 00:29:32.717 killing process with pid 3520795 00:29:32.717 01:09:16 -- common/autotest_common.sh@945 -- # kill 3520795 00:29:32.717 
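The pass/fail decision for this digest-error run is visible in the trace just above: host/digest.sh queries the bdevperf RPC socket for the bdev's NVMe error counters and requires the transient-transport-error count to be non-zero (132 here, one per forced data-digest failure). A minimal sketch of that check, assuming nothing beyond the rpc.py path and the /var/tmp/bperf.sock socket shown in the trace:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above; the rpc.py path and
# the /var/tmp/bperf.sock socket are copied from the log, nothing else is assumed.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The forced data-digest errors must have surfaced as transient transport errors.
(( errcount > 0 )) && echo "observed $errcount transient transport errors on nvme0n1"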
01:09:16 -- common/autotest_common.sh@950 -- # wait 3520795 00:29:32.977 00:29:32.977 real 0m17.634s 00:29:32.977 user 0m35.754s 00:29:32.977 sys 0m4.244s 00:29:32.977 01:09:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.977 01:09:17 -- common/autotest_common.sh@10 -- # set +x 00:29:32.977 ************************************ 00:29:32.977 END TEST nvmf_digest_error 00:29:32.977 ************************************ 00:29:32.977 01:09:17 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:32.977 01:09:17 -- host/digest.sh@139 -- # nvmftestfini 00:29:32.977 01:09:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:32.977 01:09:17 -- nvmf/common.sh@116 -- # sync 00:29:32.977 01:09:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:32.977 01:09:17 -- nvmf/common.sh@119 -- # set +e 00:29:32.977 01:09:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:32.977 01:09:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:32.977 rmmod nvme_tcp 00:29:32.977 rmmod nvme_fabrics 00:29:32.977 rmmod nvme_keyring 00:29:32.977 01:09:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:32.977 01:09:17 -- nvmf/common.sh@123 -- # set -e 00:29:32.977 01:09:17 -- nvmf/common.sh@124 -- # return 0 00:29:32.977 01:09:17 -- nvmf/common.sh@477 -- # '[' -n 3520795 ']' 00:29:32.977 01:09:17 -- nvmf/common.sh@478 -- # killprocess 3520795 00:29:32.977 01:09:17 -- common/autotest_common.sh@926 -- # '[' -z 3520795 ']' 00:29:32.977 01:09:17 -- common/autotest_common.sh@930 -- # kill -0 3520795 00:29:32.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3520795) - No such process 00:29:32.977 01:09:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3520795 is not found' 00:29:32.977 Process with pid 3520795 is not found 00:29:32.977 01:09:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:32.977 01:09:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:32.977 01:09:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:32.977 01:09:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:32.977 01:09:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:32.977 01:09:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.977 01:09:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.977 01:09:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.512 01:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:35.512 00:29:35.512 real 0m36.930s 00:29:35.512 user 1m6.150s 00:29:35.512 sys 0m9.906s 00:29:35.512 01:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.512 01:09:19 -- common/autotest_common.sh@10 -- # set +x 00:29:35.512 ************************************ 00:29:35.512 END TEST nvmf_digest 00:29:35.512 ************************************ 00:29:35.512 01:09:19 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:35.512 01:09:19 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:35.512 01:09:19 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:35.512 01:09:19 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:35.512 01:09:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:35.512 01:09:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:35.512 01:09:19 -- common/autotest_common.sh@10 -- # set +x 00:29:35.512 ************************************ 00:29:35.512 START TEST nvmf_bdevperf 00:29:35.512 
************************************ 00:29:35.512 01:09:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:35.512 * Looking for test storage... 00:29:35.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.512 01:09:19 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.512 01:09:19 -- nvmf/common.sh@7 -- # uname -s 00:29:35.512 01:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.512 01:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.512 01:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.512 01:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.512 01:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.512 01:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.512 01:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.512 01:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.512 01:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.512 01:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.512 01:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.512 01:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.512 01:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.512 01:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.512 01:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.512 01:09:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.512 01:09:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.512 01:09:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.512 01:09:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.512 01:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.512 01:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.512 01:09:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.512 01:09:19 -- paths/export.sh@5 -- # export PATH 00:29:35.512 01:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.512 01:09:19 -- nvmf/common.sh@46 -- # : 0 00:29:35.512 01:09:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:35.512 01:09:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:35.513 01:09:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:35.513 01:09:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.513 01:09:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.513 01:09:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:35.513 01:09:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:35.513 01:09:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:35.513 01:09:19 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:35.513 01:09:19 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:35.513 01:09:19 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:35.513 01:09:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:35.513 01:09:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.513 01:09:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:35.513 01:09:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:35.513 01:09:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:35.513 01:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.513 01:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:35.513 01:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.513 01:09:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:35.513 01:09:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:35.513 01:09:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:35.513 01:09:19 -- common/autotest_common.sh@10 -- # set +x 00:29:37.416 01:09:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:37.416 01:09:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:37.416 01:09:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:37.416 01:09:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:37.416 01:09:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:37.416 01:09:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:37.416 01:09:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:37.416 01:09:21 -- nvmf/common.sh@294 -- # net_devs=() 00:29:37.416 01:09:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:37.416 01:09:21 -- nvmf/common.sh@295 
-- # e810=() 00:29:37.416 01:09:21 -- nvmf/common.sh@295 -- # local -ga e810 00:29:37.416 01:09:21 -- nvmf/common.sh@296 -- # x722=() 00:29:37.416 01:09:21 -- nvmf/common.sh@296 -- # local -ga x722 00:29:37.416 01:09:21 -- nvmf/common.sh@297 -- # mlx=() 00:29:37.416 01:09:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:37.416 01:09:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.416 01:09:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:37.416 01:09:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:37.416 01:09:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:37.416 01:09:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:37.416 01:09:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:37.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:37.416 01:09:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:37.416 01:09:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:37.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:37.416 01:09:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:37.416 01:09:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:37.416 01:09:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.416 01:09:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:37.416 01:09:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.416 01:09:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:37.416 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:29:37.416 01:09:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.416 01:09:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:37.416 01:09:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.416 01:09:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:37.416 01:09:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.416 01:09:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:37.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:37.416 01:09:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.416 01:09:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:37.416 01:09:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:37.416 01:09:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:37.416 01:09:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.416 01:09:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.416 01:09:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.416 01:09:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:37.416 01:09:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.416 01:09:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.416 01:09:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:37.416 01:09:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.416 01:09:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.416 01:09:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:37.416 01:09:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:37.416 01:09:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.416 01:09:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.416 01:09:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.416 01:09:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.416 01:09:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:37.416 01:09:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.416 01:09:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.416 01:09:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.416 01:09:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:37.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:29:37.416 00:29:37.416 --- 10.0.0.2 ping statistics --- 00:29:37.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.416 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:37.416 01:09:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:29:37.416 00:29:37.416 --- 10.0.0.1 ping statistics --- 00:29:37.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.416 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:37.416 01:09:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.416 01:09:21 -- nvmf/common.sh@410 -- # return 0 00:29:37.416 01:09:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:37.416 01:09:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.416 01:09:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:37.416 01:09:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.416 01:09:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:37.416 01:09:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:37.416 01:09:21 -- host/bdevperf.sh@25 -- # tgt_init 00:29:37.416 01:09:21 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:37.416 01:09:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:37.416 01:09:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:37.416 01:09:21 -- common/autotest_common.sh@10 -- # set +x 00:29:37.416 01:09:21 -- nvmf/common.sh@469 -- # nvmfpid=3525480 00:29:37.416 01:09:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:37.416 01:09:21 -- nvmf/common.sh@470 -- # waitforlisten 3525480 00:29:37.416 01:09:21 -- common/autotest_common.sh@819 -- # '[' -z 3525480 ']' 00:29:37.416 01:09:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.416 01:09:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:37.417 01:09:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.417 01:09:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:37.417 01:09:21 -- common/autotest_common.sh@10 -- # set +x 00:29:37.417 [2024-07-23 01:09:21.362175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:37.417 [2024-07-23 01:09:21.362258] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.417 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.417 [2024-07-23 01:09:21.428098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:37.417 [2024-07-23 01:09:21.511220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:37.417 [2024-07-23 01:09:21.511372] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.417 [2024-07-23 01:09:21.511389] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.417 [2024-07-23 01:09:21.511402] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
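Before the target prints the reactor start-up notices that follow, the trace above has already wired up the physical test bed: the two discovered E810 ports are split between a private network namespace for nvmf_tgt (cvl_0_0, 10.0.0.2) and the root namespace for the initiator (cvl_0_1, 10.0.0.1), TCP port 4420 is opened, and reachability is verified with a ping in each direction. A condensed sketch of that sequence, using only the interface names, addresses and commands that appear in the trace:

#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above; cvl_0_0 is the target-side port,
# cvl_0_1 the initiator-side port, exactly as the log reports them.
set -e

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                  # the target runs in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator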
00:29:37.417 [2024-07-23 01:09:21.511456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.417 [2024-07-23 01:09:21.511515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.417 [2024-07-23 01:09:21.511518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.350 01:09:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:38.350 01:09:22 -- common/autotest_common.sh@852 -- # return 0 00:29:38.350 01:09:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:38.350 01:09:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.350 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:29:38.350 01:09:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.350 01:09:22 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.350 01:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.350 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:29:38.350 [2024-07-23 01:09:22.332077] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.350 01:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.350 01:09:22 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.350 01:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.350 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:29:38.350 Malloc0 00:29:38.350 01:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.350 01:09:22 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.350 01:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.350 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:29:38.350 01:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.350 01:09:22 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.351 01:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.351 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:29:38.351 01:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.351 01:09:22 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.351 01:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.351 01:09:22 -- common/autotest_common.sh@10 -- # set +x 00:29:38.351 [2024-07-23 01:09:22.390776] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.351 01:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.351 01:09:22 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:38.351 01:09:22 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:38.351 01:09:22 -- nvmf/common.sh@520 -- # config=() 00:29:38.351 01:09:22 -- nvmf/common.sh@520 -- # local subsystem config 00:29:38.351 01:09:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:38.351 01:09:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:38.351 { 00:29:38.351 "params": { 00:29:38.351 "name": "Nvme$subsystem", 00:29:38.351 "trtype": "$TEST_TRANSPORT", 00:29:38.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.351 "adrfam": "ipv4", 00:29:38.351 "trsvcid": "$NVMF_PORT", 00:29:38.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.351 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.351 "hdgst": ${hdgst:-false}, 00:29:38.351 "ddgst": ${ddgst:-false} 00:29:38.351 }, 00:29:38.351 "method": "bdev_nvme_attach_controller" 00:29:38.351 } 00:29:38.351 EOF 00:29:38.351 )") 00:29:38.351 01:09:22 -- nvmf/common.sh@542 -- # cat 00:29:38.351 01:09:22 -- nvmf/common.sh@544 -- # jq . 00:29:38.351 01:09:22 -- nvmf/common.sh@545 -- # IFS=, 00:29:38.351 01:09:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:38.351 "params": { 00:29:38.351 "name": "Nvme1", 00:29:38.351 "trtype": "tcp", 00:29:38.351 "traddr": "10.0.0.2", 00:29:38.351 "adrfam": "ipv4", 00:29:38.351 "trsvcid": "4420", 00:29:38.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.351 "hdgst": false, 00:29:38.351 "ddgst": false 00:29:38.351 }, 00:29:38.351 "method": "bdev_nvme_attach_controller" 00:29:38.351 }' 00:29:38.351 [2024-07-23 01:09:22.437269] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:38.351 [2024-07-23 01:09:22.437340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525636 ] 00:29:38.351 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.351 [2024-07-23 01:09:22.496301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.609 [2024-07-23 01:09:22.585549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.609 Running I/O for 1 seconds... 00:29:39.983 00:29:39.983 Latency(us) 00:29:39.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.983 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:39.983 Verification LBA range: start 0x0 length 0x4000 00:29:39.983 Nvme1n1 : 1.01 11934.92 46.62 0.00 0.00 10676.50 1146.88 17961.72 00:29:39.983 =================================================================================================================== 00:29:39.983 Total : 11934.92 46.62 0.00 0.00 10676.50 1146.88 17961.72 00:29:39.983 01:09:23 -- host/bdevperf.sh@30 -- # bdevperfpid=3525783 00:29:39.983 01:09:23 -- host/bdevperf.sh@32 -- # sleep 3 00:29:39.983 01:09:23 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:39.983 01:09:23 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:39.983 01:09:23 -- nvmf/common.sh@520 -- # config=() 00:29:39.983 01:09:23 -- nvmf/common.sh@520 -- # local subsystem config 00:29:39.983 01:09:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:39.983 01:09:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:39.983 { 00:29:39.983 "params": { 00:29:39.983 "name": "Nvme$subsystem", 00:29:39.983 "trtype": "$TEST_TRANSPORT", 00:29:39.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.983 "adrfam": "ipv4", 00:29:39.983 "trsvcid": "$NVMF_PORT", 00:29:39.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.983 "hdgst": ${hdgst:-false}, 00:29:39.983 "ddgst": ${ddgst:-false} 00:29:39.983 }, 00:29:39.983 "method": "bdev_nvme_attach_controller" 00:29:39.983 } 00:29:39.983 EOF 00:29:39.983 )") 00:29:39.983 01:09:24 -- nvmf/common.sh@542 -- # cat 00:29:39.983 01:09:24 -- nvmf/common.sh@544 -- # jq . 
00:29:39.983 01:09:24 -- nvmf/common.sh@545 -- # IFS=, 00:29:39.984 01:09:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:39.984 "params": { 00:29:39.984 "name": "Nvme1", 00:29:39.984 "trtype": "tcp", 00:29:39.984 "traddr": "10.0.0.2", 00:29:39.984 "adrfam": "ipv4", 00:29:39.984 "trsvcid": "4420", 00:29:39.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.984 "hdgst": false, 00:29:39.984 "ddgst": false 00:29:39.984 }, 00:29:39.984 "method": "bdev_nvme_attach_controller" 00:29:39.984 }' 00:29:39.984 [2024-07-23 01:09:24.041088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:39.984 [2024-07-23 01:09:24.041165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525783 ] 00:29:39.984 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.984 [2024-07-23 01:09:24.100191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.984 [2024-07-23 01:09:24.185448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.242 Running I/O for 15 seconds... 00:29:43.530 01:09:27 -- host/bdevperf.sh@33 -- # kill -9 3525480 00:29:43.530 01:09:27 -- host/bdevperf.sh@35 -- # sleep 3 00:29:43.530 [2024-07-23 01:09:27.010912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.530 [2024-07-23 01:09:27.010985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.530 [2024-07-23 01:09:27.011022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.530 [2024-07-23 01:09:27.011040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.530 [2024-07-23 01:09:27.011058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.530 [2024-07-23 01:09:27.011075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.530 [2024-07-23 01:09:27.011092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:36 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.011981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.011998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 
01:09:27.012277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.531 [2024-07-23 01:09:27.012374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.531 [2024-07-23 01:09:27.012407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.531 [2024-07-23 01:09:27.012455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.531 [2024-07-23 01:09:27.012470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.012502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.012571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.012944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.012969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.012985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.532 [2024-07-23 01:09:27.013572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 
01:09:27.013631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.532 [2024-07-23 01:09:27.013766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.532 [2024-07-23 01:09:27.013779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.013812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.013842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.013871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.013916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.013949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.013982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.013998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.014276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.014599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10368 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.014644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.014776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.014831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.014860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.014941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 
[2024-07-23 01:09:27.014974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.014992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.533 [2024-07-23 01:09:27.015007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.015025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.015040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.015061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.533 [2024-07-23 01:09:27.015078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.533 [2024-07-23 01:09:27.015096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.534 [2024-07-23 01:09:27.015112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.534 [2024-07-23 01:09:27.015144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.534 [2024-07-23 01:09:27.015178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.534 [2024-07-23 01:09:27.015212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.534 [2024-07-23 01:09:27.015247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.534 [2024-07-23 01:09:27.015281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5450 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.015317] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.534 [2024-07-23 01:09:27.015330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.534 [2024-07-23 01:09:27.015344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:8 PRP1 0x0 PRP2 0x0 00:29:43.534 [2024-07-23 01:09:27.015358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.534 [2024-07-23 01:09:27.015422] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac5450 was disconnected and freed. reset controller. 00:29:43.534 [2024-07-23 01:09:27.017835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.017918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.018703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.018876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.018902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.018936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.019068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.019227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.019251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.019269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.021466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
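Every aborted command in the dump above completes with status "(00/08)", which the SPDK print helper renders as "ABORTED - SQ DELETION": status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), plus p:0 m:0 dnr:0 from the same 16-bit status halfword. A minimal standalone sketch of that decode follows; it assumes only the standard NVMe completion-status bit layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9, M in bit 14, DNR in bit 15) and is not SPDK code.

/* decode_status.c - decode the 16-bit NVMe CQE status halfword (dword 3, bits 31:16) */
#include <stdint.h>
#include <stdio.h>

static void decode_nvme_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* status code */
    unsigned sct = (status >> 9) & 0x7;   /* status code type */
    unsigned m   = (status >> 14) & 0x1;  /* more */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08) {
        /* generic status, command aborted due to SQ deletion */
        printf("ABORTED - SQ DELETION\n");
    }
}

int main(void)
{
    decode_nvme_status(0x08 << 1); /* SCT 0x0, SC 0x08, p/m/dnr 0 -> matches the log lines */
    return 0;
}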
00:29:43.534 [2024-07-23 01:09:27.030760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.031178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.031410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.031437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.031468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.031660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.031778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.031802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.031819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.034062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.534 [2024-07-23 01:09:27.043236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.043634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.043865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.043894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.043912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.044078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.044230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.044255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.044272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.046632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
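Each reset attempt above reopens the TCP connection to the target at 10.0.0.2 port 4420, and posix_sock_create reports connect() failing with errno = 111, which is ECONNREFUSED on Linux: nothing is accepting connections on that port while the target side is being torn down. The following sketch is plain POSIX socket code, not SPDK; the address and port are taken from the log, and it reproduces the same errno whenever no listener is present.

/* econnrefused.c - reproduce "connect() failed, errno = 111" with a bare TCP connect */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);     /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED) on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}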
00:29:43.534 [2024-07-23 01:09:27.055700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.056037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.056312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.056362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.056380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.056546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.056710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.056741] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.056758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.059088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.534 [2024-07-23 01:09:27.068388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.068786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.069010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.069036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.069068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.069250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.069475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.069500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.069517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.072039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.534 [2024-07-23 01:09:27.081089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.081463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.081637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.081664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.081681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.081864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.082054] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.082079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.082096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.084410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.534 [2024-07-23 01:09:27.093808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.094153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.094325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.094369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.534 [2024-07-23 01:09:27.094387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.534 [2024-07-23 01:09:27.094589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.534 [2024-07-23 01:09:27.094824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.534 [2024-07-23 01:09:27.094850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.534 [2024-07-23 01:09:27.094872] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.534 [2024-07-23 01:09:27.097024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.534 [2024-07-23 01:09:27.106456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.534 [2024-07-23 01:09:27.106952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.107255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.534 [2024-07-23 01:09:27.107281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.107297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.107493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.107666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.107691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.107709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.110129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.535 [2024-07-23 01:09:27.118979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.119431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.119670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.119697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.119713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.119880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.120033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.120058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.120074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.122388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.535 [2024-07-23 01:09:27.131430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.131727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.131959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.131985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.132000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.132162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.132368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.132393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.132410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.134718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.535 [2024-07-23 01:09:27.144052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.144441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.144654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.144684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.144702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.144886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.145038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.145063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.145081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.147230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.535 [2024-07-23 01:09:27.156803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.157124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.157405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.157432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.157447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.157625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.157777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.157803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.157819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.160258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.535 [2024-07-23 01:09:27.169311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.169658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.169820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.169849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.169867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.170015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.170185] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.170210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.170227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.172746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.535 [2024-07-23 01:09:27.181956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.182337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.182486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.182514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.182531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.182718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.535 [2024-07-23 01:09:27.182937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.535 [2024-07-23 01:09:27.182963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.535 [2024-07-23 01:09:27.182980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.535 [2024-07-23 01:09:27.185438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.535 [2024-07-23 01:09:27.194369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.535 [2024-07-23 01:09:27.194733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.195012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.535 [2024-07-23 01:09:27.195067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.535 [2024-07-23 01:09:27.195085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.535 [2024-07-23 01:09:27.195269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.195422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.195447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.195464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.197859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.536 [2024-07-23 01:09:27.206834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.207255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.207440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.207469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.207487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.207665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.207854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.207880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.207897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.210334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.536 [2024-07-23 01:09:27.219668] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.220040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.220275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.220326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.220345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.220492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.220692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.220718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.220735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.223191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.536 [2024-07-23 01:09:27.232154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.232536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.232713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.232741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.232758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.232921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.233056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.233082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.233098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.235391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.536 [2024-07-23 01:09:27.244817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.245141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.245377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.245403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.245419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.245600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.245782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.245808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.245824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.248136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.536 [2024-07-23 01:09:27.257373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.257774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.257950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.257981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.257998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.258132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.258281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.258307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.258323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.260628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.536 [2024-07-23 01:09:27.269781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.270183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.270350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.270391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.270410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.270558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.270760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.270786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.270803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.273096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.536 [2024-07-23 01:09:27.282568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.282974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.283145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.283189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.283208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.283356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.283544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.283569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.283585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.285853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.536 [2024-07-23 01:09:27.295073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.295451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.295629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.295657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.295681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.295815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.295975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.296000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.296017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.298437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.536 [2024-07-23 01:09:27.307542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.536 [2024-07-23 01:09:27.307932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.308148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.536 [2024-07-23 01:09:27.308179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.536 [2024-07-23 01:09:27.308197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.536 [2024-07-23 01:09:27.308381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.536 [2024-07-23 01:09:27.308551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.536 [2024-07-23 01:09:27.308577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.536 [2024-07-23 01:09:27.308593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.536 [2024-07-23 01:09:27.311007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.537 [2024-07-23 01:09:27.320205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.320563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.320792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.320819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.320836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.321023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.321194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.321219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.321236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.323550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.537 [2024-07-23 01:09:27.332784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.333226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.333395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.333421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.333437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.333590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.333773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.333799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.333815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.336127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.537 [2024-07-23 01:09:27.345108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.345492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.345646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.345692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.345711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.345878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.346049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.346074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.346090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.348530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.537 [2024-07-23 01:09:27.357704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.358089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.358298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.358327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.358346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.358529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.358729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.358755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.358772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.361013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.537 [2024-07-23 01:09:27.370235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.370569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.370766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.370796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.370814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.370997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.371209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.371234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.371251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.373697] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.537 [2024-07-23 01:09:27.382891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.383230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.383408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.383435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.383451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.383665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.383808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.383833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.383849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.386143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.537 [2024-07-23 01:09:27.395423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.395783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.396078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.396132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.396150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.396335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.396560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.396585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.396601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.398942] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.537 [2024-07-23 01:09:27.408083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.408444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.408656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.408687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.408705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.408871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.409023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.409053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.409070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.411420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.537 [2024-07-23 01:09:27.420639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.420950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.421196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.421222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.537 [2024-07-23 01:09:27.421252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.537 [2024-07-23 01:09:27.421397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.537 [2024-07-23 01:09:27.421651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.537 [2024-07-23 01:09:27.421677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.537 [2024-07-23 01:09:27.421693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.537 [2024-07-23 01:09:27.424095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.537 [2024-07-23 01:09:27.433259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.537 [2024-07-23 01:09:27.433664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.433834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.537 [2024-07-23 01:09:27.433863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.433881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.434119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.434272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.434297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.434313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.436539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.538 [2024-07-23 01:09:27.445579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.445957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.446299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.446323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.446337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.446444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.446558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.446582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.446604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.448814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.538 [2024-07-23 01:09:27.457980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.458395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.458683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.458726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.458742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.458903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.459038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.459062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.459078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.461443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.538 [2024-07-23 01:09:27.470734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.471108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.471401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.471430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.471448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.471623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.471794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.471820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.471836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.473930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.538 [2024-07-23 01:09:27.483335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.483722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.483910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.483937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.483953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.484158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.484338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.484364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.484380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.486725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.538 [2024-07-23 01:09:27.495848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.496210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.496473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.496502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.496520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.496680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.496815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.496840] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.496856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.499061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.538 [2024-07-23 01:09:27.508406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.508758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.508968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.508997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.509016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.509200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.509388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.509413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.509429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.511678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.538 [2024-07-23 01:09:27.520932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.521311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.521534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.521561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.521591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.521742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.521900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.521921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.521950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.524211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.538 [2024-07-23 01:09:27.533551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.533947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.534121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.534162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.534178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.534347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.534506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.538 [2024-07-23 01:09:27.534532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.538 [2024-07-23 01:09:27.534548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.538 [2024-07-23 01:09:27.536778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.538 [2024-07-23 01:09:27.546260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.538 [2024-07-23 01:09:27.546703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.546926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.538 [2024-07-23 01:09:27.546953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.538 [2024-07-23 01:09:27.546985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.538 [2024-07-23 01:09:27.547113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.538 [2024-07-23 01:09:27.547314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.547338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.547355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.549733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.539 [2024-07-23 01:09:27.558851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.559366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.559611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.559768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.559787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.559953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.560105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.560130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.560146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.562457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.539 [2024-07-23 01:09:27.571293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.571714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.571923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.571953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.571971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.572137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.572307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.572332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.572348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.574653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.539 [2024-07-23 01:09:27.583773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.584179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.584455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.584511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.584529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.584742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.584913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.584938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.584954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.587341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.539 [2024-07-23 01:09:27.596250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.596567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.596797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.596824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.596855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.596995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.597148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.597173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.597189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.599428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.539 [2024-07-23 01:09:27.608893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.609281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.609462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.609492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.609523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.609683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.609890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.609915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.609931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.612189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.539 [2024-07-23 01:09:27.621709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.622237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.622546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.622570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.622586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.622854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.623062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.623087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.623103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.625522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.539 [2024-07-23 01:09:27.634264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.634636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.634874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.634904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.634922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.635089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.635230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.635255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.635271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.637441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.539 [2024-07-23 01:09:27.646695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.647091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.647343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.647390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.647414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.539 [2024-07-23 01:09:27.647598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.539 [2024-07-23 01:09:27.647753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.539 [2024-07-23 01:09:27.647776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.539 [2024-07-23 01:09:27.647792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.539 [2024-07-23 01:09:27.650165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.539 [2024-07-23 01:09:27.659385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.539 [2024-07-23 01:09:27.659767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.659925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.539 [2024-07-23 01:09:27.659950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.539 [2024-07-23 01:09:27.659967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.540 [2024-07-23 01:09:27.660100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.540 [2024-07-23 01:09:27.660317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.540 [2024-07-23 01:09:27.660342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.540 [2024-07-23 01:09:27.660358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.540 [2024-07-23 01:09:27.662730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.540 [2024-07-23 01:09:27.671983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.540 [2024-07-23 01:09:27.672358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.672581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.672629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.540 [2024-07-23 01:09:27.672666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.540 [2024-07-23 01:09:27.672816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.540 [2024-07-23 01:09:27.673003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.540 [2024-07-23 01:09:27.673028] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.540 [2024-07-23 01:09:27.673044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.540 [2024-07-23 01:09:27.675474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.540 [2024-07-23 01:09:27.684538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.540 [2024-07-23 01:09:27.684835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.685005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.685048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.540 [2024-07-23 01:09:27.685067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.540 [2024-07-23 01:09:27.685220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.540 [2024-07-23 01:09:27.685427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.540 [2024-07-23 01:09:27.685452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.540 [2024-07-23 01:09:27.685468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.540 [2024-07-23 01:09:27.687963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.540 [2024-07-23 01:09:27.697086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.540 [2024-07-23 01:09:27.697501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.697694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.697721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.540 [2024-07-23 01:09:27.697737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.540 [2024-07-23 01:09:27.697887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.540 [2024-07-23 01:09:27.698103] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.540 [2024-07-23 01:09:27.698125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.540 [2024-07-23 01:09:27.698138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.540 [2024-07-23 01:09:27.700424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.540 [2024-07-23 01:09:27.709463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.540 [2024-07-23 01:09:27.709766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.709917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.709943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.540 [2024-07-23 01:09:27.709958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.540 [2024-07-23 01:09:27.710056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.540 [2024-07-23 01:09:27.710254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.540 [2024-07-23 01:09:27.710280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.540 [2024-07-23 01:09:27.710296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.540 [2024-07-23 01:09:27.712796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.540 [2024-07-23 01:09:27.722081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.540 [2024-07-23 01:09:27.722558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.723360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.540 [2024-07-23 01:09:27.723395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.540 [2024-07-23 01:09:27.723425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.540 [2024-07-23 01:09:27.723593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.540 [2024-07-23 01:09:27.723745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.540 [2024-07-23 01:09:27.723769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.540 [2024-07-23 01:09:27.723787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.540 [2024-07-23 01:09:27.726174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.800 [2024-07-23 01:09:27.734374] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.734784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.734934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.734988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.800 [2024-07-23 01:09:27.735006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.800 [2024-07-23 01:09:27.735190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.800 [2024-07-23 01:09:27.735378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.800 [2024-07-23 01:09:27.735404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.800 [2024-07-23 01:09:27.735421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.800 [2024-07-23 01:09:27.737684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.800 [2024-07-23 01:09:27.746954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.747306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.747491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.747520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.800 [2024-07-23 01:09:27.747538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.800 [2024-07-23 01:09:27.747725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.800 [2024-07-23 01:09:27.747831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.800 [2024-07-23 01:09:27.747853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.800 [2024-07-23 01:09:27.747868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.800 [2024-07-23 01:09:27.750215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.800 [2024-07-23 01:09:27.759481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.759809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.759984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.760010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.800 [2024-07-23 01:09:27.760026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.800 [2024-07-23 01:09:27.760158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.800 [2024-07-23 01:09:27.760382] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.800 [2024-07-23 01:09:27.760412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.800 [2024-07-23 01:09:27.760430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.800 [2024-07-23 01:09:27.762862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.800 [2024-07-23 01:09:27.772060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.772425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.772636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.772676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.800 [2024-07-23 01:09:27.772695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.800 [2024-07-23 01:09:27.772861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.800 [2024-07-23 01:09:27.773068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.800 [2024-07-23 01:09:27.773093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.800 [2024-07-23 01:09:27.773110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.800 [2024-07-23 01:09:27.775513] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.800 [2024-07-23 01:09:27.784554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.784921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.785069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.785095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.800 [2024-07-23 01:09:27.785111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.800 [2024-07-23 01:09:27.785299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.800 [2024-07-23 01:09:27.785462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.800 [2024-07-23 01:09:27.785488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.800 [2024-07-23 01:09:27.785505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.800 [2024-07-23 01:09:27.787846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.800 [2024-07-23 01:09:27.796951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.797420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.797661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.800 [2024-07-23 01:09:27.797689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.800 [2024-07-23 01:09:27.797706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.800 [2024-07-23 01:09:27.797907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.800 [2024-07-23 01:09:27.798059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.800 [2024-07-23 01:09:27.798085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.800 [2024-07-23 01:09:27.798108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.800 [2024-07-23 01:09:27.800298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.800 [2024-07-23 01:09:27.809443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.800 [2024-07-23 01:09:27.809845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.810059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.810107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.810126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.810292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.810462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.810486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.810503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.812787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.801 [2024-07-23 01:09:27.822066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.822601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.822849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.822879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.822897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.823117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.823305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.823330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.823347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.825499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.801 [2024-07-23 01:09:27.834518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.834900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.835098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.835127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.835146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.835294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.835427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.835451] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.835467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.837901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.801 [2024-07-23 01:09:27.847032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.847386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.847597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.847638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.847667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.847887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.848040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.848066] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.848082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.850369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.801 [2024-07-23 01:09:27.859691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.860080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.860309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.860358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.860379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.860545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.860730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.860756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.860773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.863070] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.801 [2024-07-23 01:09:27.872019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.872422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.872621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.872676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.872694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.872828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.873025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.873052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.873068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.875325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.801 [2024-07-23 01:09:27.884580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.884989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.885210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.885237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.885268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.885450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.885630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.885673] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.885687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.888046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.801 [2024-07-23 01:09:27.897049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.897504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.897712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.897742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.897760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.897944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.898115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.898141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.898157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.900543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.801 [2024-07-23 01:09:27.909623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.909997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.910197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.910227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.910246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.801 [2024-07-23 01:09:27.910466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.801 [2024-07-23 01:09:27.910644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.801 [2024-07-23 01:09:27.910688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.801 [2024-07-23 01:09:27.910707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.801 [2024-07-23 01:09:27.913023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.801 [2024-07-23 01:09:27.922000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.801 [2024-07-23 01:09:27.922392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.922630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.801 [2024-07-23 01:09:27.922660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.801 [2024-07-23 01:09:27.922679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.922827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.922961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.922986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.923003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.802 [2024-07-23 01:09:27.925246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.802 [2024-07-23 01:09:27.934358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.802 [2024-07-23 01:09:27.934730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.934908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.934936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.802 [2024-07-23 01:09:27.934954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.935120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.935272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.935298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.935315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.802 [2024-07-23 01:09:27.937695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.802 [2024-07-23 01:09:27.947045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.802 [2024-07-23 01:09:27.947432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.947629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.947659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.802 [2024-07-23 01:09:27.947677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.947862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.948013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.948038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.948055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.802 [2024-07-23 01:09:27.950487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.802 [2024-07-23 01:09:27.959704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.802 [2024-07-23 01:09:27.960039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.960288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.960323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.802 [2024-07-23 01:09:27.960343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.960563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.960749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.960776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.960793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.802 [2024-07-23 01:09:27.963271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.802 [2024-07-23 01:09:27.972314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.802 [2024-07-23 01:09:27.972686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.972837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.972865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.802 [2024-07-23 01:09:27.972883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.973049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.973200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.973224] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.973241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.802 [2024-07-23 01:09:27.975415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.802 [2024-07-23 01:09:27.984920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.802 [2024-07-23 01:09:27.985279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.985476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.985506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.802 [2024-07-23 01:09:27.985524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.985665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.985835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.985859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.985875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.802 [2024-07-23 01:09:27.988117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.802 [2024-07-23 01:09:27.997512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.802 [2024-07-23 01:09:27.997859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.998074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.802 [2024-07-23 01:09:27.998120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:43.802 [2024-07-23 01:09:27.998144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:43.802 [2024-07-23 01:09:27.998293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:43.802 [2024-07-23 01:09:27.998481] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.802 [2024-07-23 01:09:27.998507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.802 [2024-07-23 01:09:27.998524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.001013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.062 [2024-07-23 01:09:28.009972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.010377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.010553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.010593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.010609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.010811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.010982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.011005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.011023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.013410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.062 [2024-07-23 01:09:28.022620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.023033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.023247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.023275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.023294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.023513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.023680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.023707] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.023724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.025807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.062 [2024-07-23 01:09:28.035174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.035528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.035693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.035724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.035743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.035880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.036068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.036094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.036111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.038481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.062 [2024-07-23 01:09:28.047598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.048079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.048274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.048307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.048344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.048511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.048711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.048737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.048754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.051076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.062 [2024-07-23 01:09:28.060103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.060501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.060708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.060737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.060754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.060954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.061123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.061149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.061166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.063535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.062 [2024-07-23 01:09:28.072636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.072987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.073170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.073200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.073217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.073366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.073577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.073603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.073635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.062 [2024-07-23 01:09:28.075860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.062 [2024-07-23 01:09:28.085320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.062 [2024-07-23 01:09:28.085713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.085930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.062 [2024-07-23 01:09:28.085956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.062 [2024-07-23 01:09:28.085972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.062 [2024-07-23 01:09:28.086135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.062 [2024-07-23 01:09:28.086341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.062 [2024-07-23 01:09:28.086367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.062 [2024-07-23 01:09:28.086384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.088693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.063 [2024-07-23 01:09:28.097998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.098369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.098557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.098587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.098605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.098782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.098969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.098995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.099013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.101129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.063 [2024-07-23 01:09:28.110501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.110868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.111097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.111123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.111139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.111320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.111490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.111525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.111544] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.113891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.063 [2024-07-23 01:09:28.123002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.123327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.123539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.123570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.123588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.123767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.123902] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.123928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.123945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.126206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.063 [2024-07-23 01:09:28.135540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.135885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.136102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.136149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.136168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.136334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.136504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.136530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.136547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.139037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.063 [2024-07-23 01:09:28.148244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.148603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.148810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.148838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.148854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.149070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.149260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.149286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.149308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.151658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.063 [2024-07-23 01:09:28.160825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.161205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.161399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.161425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.161441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.161599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.161833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.161859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.161876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.164066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.063 [2024-07-23 01:09:28.173246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.173596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.173843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.173887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.173905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.174090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.174242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.174268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.174284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.176511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.063 [2024-07-23 01:09:28.185878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.186267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.186446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.186473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.186490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.186679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.186874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.186901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.186917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.189255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.063 [2024-07-23 01:09:28.198519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.198858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.199068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.199108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.063 [2024-07-23 01:09:28.199126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.063 [2024-07-23 01:09:28.199311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.063 [2024-07-23 01:09:28.199463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.063 [2024-07-23 01:09:28.199489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.063 [2024-07-23 01:09:28.199506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.063 [2024-07-23 01:09:28.201925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.063 [2024-07-23 01:09:28.211255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.063 [2024-07-23 01:09:28.211686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.063 [2024-07-23 01:09:28.211989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.212039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.064 [2024-07-23 01:09:28.212058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.064 [2024-07-23 01:09:28.212242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.064 [2024-07-23 01:09:28.212411] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.064 [2024-07-23 01:09:28.212437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.064 [2024-07-23 01:09:28.212454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.064 [2024-07-23 01:09:28.214780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.064 [2024-07-23 01:09:28.223692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.064 [2024-07-23 01:09:28.224091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.224354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.224384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.064 [2024-07-23 01:09:28.224403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.064 [2024-07-23 01:09:28.224588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.064 [2024-07-23 01:09:28.224753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.064 [2024-07-23 01:09:28.224781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.064 [2024-07-23 01:09:28.224798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.064 [2024-07-23 01:09:28.227110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.064 [2024-07-23 01:09:28.236190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.064 [2024-07-23 01:09:28.236536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.236830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.236885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.064 [2024-07-23 01:09:28.236903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.064 [2024-07-23 01:09:28.237052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.064 [2024-07-23 01:09:28.237221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.064 [2024-07-23 01:09:28.237247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.064 [2024-07-23 01:09:28.237264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.064 [2024-07-23 01:09:28.239714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.064 [2024-07-23 01:09:28.248722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.064 [2024-07-23 01:09:28.249099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.249293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.249319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.064 [2024-07-23 01:09:28.249335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.064 [2024-07-23 01:09:28.249480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.064 [2024-07-23 01:09:28.249646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.064 [2024-07-23 01:09:28.249672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.064 [2024-07-23 01:09:28.249688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.064 [2024-07-23 01:09:28.251948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.064 [2024-07-23 01:09:28.261226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.064 [2024-07-23 01:09:28.261593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.261777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.064 [2024-07-23 01:09:28.261804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.064 [2024-07-23 01:09:28.261820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.064 [2024-07-23 01:09:28.262047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.064 [2024-07-23 01:09:28.262201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.064 [2024-07-23 01:09:28.262228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.064 [2024-07-23 01:09:28.262245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.323 [2024-07-23 01:09:28.264530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.324 [2024-07-23 01:09:28.273945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.274350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.274558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.274589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.274607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.274804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.274956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.274982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.274998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.277262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.324 [2024-07-23 01:09:28.286266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.286680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.286850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.286877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.286894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.287044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.287231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.287258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.287275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.289633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.324 [2024-07-23 01:09:28.298947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.299449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.299694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.299723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.299742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.299908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.300113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.300139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.300156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.302639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.324 [2024-07-23 01:09:28.311532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.311963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.312295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.312361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.312381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.312566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.312771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.312799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.312816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.315254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.324 [2024-07-23 01:09:28.323950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.324430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.324664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.324694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.324711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.324859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.325010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.325036] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.325053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.327488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.324 [2024-07-23 01:09:28.336672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.337016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.337370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.337420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.337438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.337636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.337789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.337815] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.337832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.340236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.324 [2024-07-23 01:09:28.349278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.349667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.349868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.349896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.349920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.350069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.350275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.350302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.350318] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.352867] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.324 [2024-07-23 01:09:28.361820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.362218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.362570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.362632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.362652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.362836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.362969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.362993] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.363009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.365413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.324 [2024-07-23 01:09:28.374361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-23 01:09:28.374744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.374920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.324 [2024-07-23 01:09:28.374963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.324 [2024-07-23 01:09:28.374981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.324 [2024-07-23 01:09:28.375147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.324 [2024-07-23 01:09:28.375262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.324 [2024-07-23 01:09:28.375286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.324 [2024-07-23 01:09:28.375303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-23 01:09:28.377563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.324 [2024-07-23 01:09:28.387094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.387445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.387654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.387684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.387701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.387819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.387989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.388013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.388029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.390398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.325 [2024-07-23 01:09:28.399631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.399988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.400269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.400315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.400333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.400500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.400702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.400728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.400745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.403169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.325 [2024-07-23 01:09:28.412074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.412418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.412646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.412674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.412690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.412917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.413069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.413095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.413111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.415264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.325 [2024-07-23 01:09:28.424700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.425087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.425398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.425460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.425479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.425678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.425890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.425917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.425934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.428192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.325 [2024-07-23 01:09:28.437161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.437530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.437760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.437786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.437818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.437984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.438190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.438216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.438232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.440585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.325 [2024-07-23 01:09:28.449822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.450277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.450408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.450432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.450448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.450697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.450872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.450908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.450925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.453096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.325 [2024-07-23 01:09:28.462302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.462684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.462868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.462897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.462914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.463063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.463196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.463225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.463242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.465523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.325 [2024-07-23 01:09:28.474964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.475315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.475543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.475570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.475586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.475779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.475950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.475975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.475991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.478304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.325 [2024-07-23 01:09:28.487537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.487929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.488177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.488219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.488235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.325 [2024-07-23 01:09:28.488417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.325 [2024-07-23 01:09:28.488605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.325 [2024-07-23 01:09:28.488643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.325 [2024-07-23 01:09:28.488660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.325 [2024-07-23 01:09:28.490958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.325 [2024-07-23 01:09:28.500046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.325 [2024-07-23 01:09:28.500425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.500601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.325 [2024-07-23 01:09:28.500644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.325 [2024-07-23 01:09:28.500663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.326 [2024-07-23 01:09:28.500867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.326 [2024-07-23 01:09:28.501036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.326 [2024-07-23 01:09:28.501061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.326 [2024-07-23 01:09:28.501083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.326 [2024-07-23 01:09:28.503364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.326 [2024-07-23 01:09:28.512548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.326 [2024-07-23 01:09:28.512888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.326 [2024-07-23 01:09:28.513263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.326 [2024-07-23 01:09:28.513316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.326 [2024-07-23 01:09:28.513334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.326 [2024-07-23 01:09:28.513518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.326 [2024-07-23 01:09:28.513683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.326 [2024-07-23 01:09:28.513708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.326 [2024-07-23 01:09:28.513723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.326 [2024-07-23 01:09:28.516038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.585 [2024-07-23 01:09:28.524979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.585 [2024-07-23 01:09:28.525418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.585 [2024-07-23 01:09:28.525657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.585 [2024-07-23 01:09:28.525685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.585 [2024-07-23 01:09:28.525702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.585 [2024-07-23 01:09:28.525872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.585 [2024-07-23 01:09:28.526078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.585 [2024-07-23 01:09:28.526104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.585 [2024-07-23 01:09:28.526121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.585 [2024-07-23 01:09:28.528631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.585 [2024-07-23 01:09:28.537626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.585 [2024-07-23 01:09:28.537943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.585 [2024-07-23 01:09:28.538134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.585 [2024-07-23 01:09:28.538176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.585 [2024-07-23 01:09:28.538192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.585 [2024-07-23 01:09:28.538372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.585 [2024-07-23 01:09:28.538559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.585 [2024-07-23 01:09:28.538585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.585 [2024-07-23 01:09:28.538601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.585 [2024-07-23 01:09:28.540807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.585 [2024-07-23 01:09:28.550380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.585 [2024-07-23 01:09:28.550782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.551010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.551083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.551102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.551287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.551456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.551482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.551499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.553898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.586 [2024-07-23 01:09:28.562948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.563472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.563711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.563738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.563754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.563889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.564058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.564083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.564101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.566505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.586 [2024-07-23 01:09:28.575648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.576028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.576232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.576257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.576274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.576419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.576606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.576644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.576662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.579067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.586 [2024-07-23 01:09:28.588340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.588727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.589048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.589100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.589118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.589284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.589472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.589497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.589514] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.591751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.586 [2024-07-23 01:09:28.600991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.601320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.601502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.601530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.601548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.601691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.601880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.601906] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.601923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.604240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.586 [2024-07-23 01:09:28.613643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.614004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.614224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.614250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.614266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.614452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.614601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.614640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.614658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.617117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.586 [2024-07-23 01:09:28.626019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.626351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.626537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.626567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.626585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.626801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.626971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.626997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.627013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.629346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.586 [2024-07-23 01:09:28.638517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.638847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.639087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.639118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.639136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.639340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.639473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.639498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.639515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.641935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.586 [2024-07-23 01:09:28.651245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.651722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.651916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.651945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.586 [2024-07-23 01:09:28.651963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.586 [2024-07-23 01:09:28.652167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.586 [2024-07-23 01:09:28.652300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.586 [2024-07-23 01:09:28.652327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.586 [2024-07-23 01:09:28.652343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.586 [2024-07-23 01:09:28.654835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.586 [2024-07-23 01:09:28.663895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.586 [2024-07-23 01:09:28.664265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.664449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.586 [2024-07-23 01:09:28.664482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.664501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.664663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.664852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.664878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.664895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.667101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.587 [2024-07-23 01:09:28.676632] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.677007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.677328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.677388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.677407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.677591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.677738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.677764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.677780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.679916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.587 [2024-07-23 01:09:28.689167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.689510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.689752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.689781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.689799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.689966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.690153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.690179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.690196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.692461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.587 [2024-07-23 01:09:28.701640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.702001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.702182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.702223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.702246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.702407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.702577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.702603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.702632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.704803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.587 [2024-07-23 01:09:28.714125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.714668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.714885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.714910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.714927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.715097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.715267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.715293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.715310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.717743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.587 [2024-07-23 01:09:28.726769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.727233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.727459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.727506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.727526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.727703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.727873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.727899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.727916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.730231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.587 [2024-07-23 01:09:28.739193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.739575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.739772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.739803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.739821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.740012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.740200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.740226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.740243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.742521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.587 [2024-07-23 01:09:28.751669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.752144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.752328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.752357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.752375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.752558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.752700] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.752726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.752743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.755224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.587 [2024-07-23 01:09:28.764306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.764720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.764894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.764920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.764936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.765114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.765313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.765335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.587 [2024-07-23 01:09:28.765349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.587 [2024-07-23 01:09:28.767710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.587 [2024-07-23 01:09:28.776987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.587 [2024-07-23 01:09:28.777448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.777692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.587 [2024-07-23 01:09:28.777718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.587 [2024-07-23 01:09:28.777734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.587 [2024-07-23 01:09:28.777894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.587 [2024-07-23 01:09:28.778067] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.587 [2024-07-23 01:09:28.778094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.588 [2024-07-23 01:09:28.778110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.588 [2024-07-23 01:09:28.780325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.847 [2024-07-23 01:09:28.789606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.847 [2024-07-23 01:09:28.789959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.847 [2024-07-23 01:09:28.790201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.847 [2024-07-23 01:09:28.790230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.847 [2024-07-23 01:09:28.790248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.847 [2024-07-23 01:09:28.790467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.847 [2024-07-23 01:09:28.790678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.847 [2024-07-23 01:09:28.790702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.847 [2024-07-23 01:09:28.790717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.847 [2024-07-23 01:09:28.793112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.847 [2024-07-23 01:09:28.802252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.847 [2024-07-23 01:09:28.802634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.847 [2024-07-23 01:09:28.802812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.847 [2024-07-23 01:09:28.802838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.847 [2024-07-23 01:09:28.802855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.847 [2024-07-23 01:09:28.803006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.847 [2024-07-23 01:09:28.803176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.847 [2024-07-23 01:09:28.803201] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.847 [2024-07-23 01:09:28.803217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.847 [2024-07-23 01:09:28.805729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.847 [2024-07-23 01:09:28.814664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.847 [2024-07-23 01:09:28.814958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.847 [2024-07-23 01:09:28.815149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.847 [2024-07-23 01:09:28.815174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.847 [2024-07-23 01:09:28.815190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.847 [2024-07-23 01:09:28.815342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.847 [2024-07-23 01:09:28.815525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.847 [2024-07-23 01:09:28.815550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.847 [2024-07-23 01:09:28.815564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.847 [2024-07-23 01:09:28.817786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.847 [2024-07-23 01:09:28.827281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.827639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.827803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.827829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.827845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.828025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.828214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.828238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.828255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.830284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.848 [2024-07-23 01:09:28.839975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.840342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.840554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.840583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.840601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.840750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.840872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.840894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.840930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.843168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.848 [2024-07-23 01:09:28.852557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.852887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.853125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.853166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.853184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.853350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.853502] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.853526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.853548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.855796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.848 [2024-07-23 01:09:28.864926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.865369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.865595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.865632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.865667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.865832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.866030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.866055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.866071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.868355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.848 [2024-07-23 01:09:28.877563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.877925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.878100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.878129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.878147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.878295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.878429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.878454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.878470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.880931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.848 [2024-07-23 01:09:28.889997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.890404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.890622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.890651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.890669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.890872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.891024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.891050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.891067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.893275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.848 [2024-07-23 01:09:28.902453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.902843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.903061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.903087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.903104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.903329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.903517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.903543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.903559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.906064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.848 [2024-07-23 01:09:28.914990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.915408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.915590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.915641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.915660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.915850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.915984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.916009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.916025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.918445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.848 [2024-07-23 01:09:28.927911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.928267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.928484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.928513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.848 [2024-07-23 01:09:28.928531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.848 [2024-07-23 01:09:28.928709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.848 [2024-07-23 01:09:28.928880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.848 [2024-07-23 01:09:28.928904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.848 [2024-07-23 01:09:28.928921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.848 [2024-07-23 01:09:28.931200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.848 [2024-07-23 01:09:28.940506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.848 [2024-07-23 01:09:28.940902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.941123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.848 [2024-07-23 01:09:28.941174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:28.941192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:28.941304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:28.941474] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:28.941499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:28.941515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:28.943710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.849 [2024-07-23 01:09:28.953133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:28.953546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.953748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.953778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:28.953796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:28.953962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:28.954123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:28.954148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:28.954164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:28.956553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.849 [2024-07-23 01:09:28.965561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:28.965912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.966124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.966164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:28.966180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:28.966318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:28.966481] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:28.966507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:28.966523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:28.968902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.849 [2024-07-23 01:09:28.978186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:28.978578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.978807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.978834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:28.978850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:28.978983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:28.979150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:28.979174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:28.979191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:28.981638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.849 [2024-07-23 01:09:28.990791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:28.991162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.991385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:28.991414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:28.991432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:28.991597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:28.991778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:28.991803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:28.991819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:28.994213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.849 [2024-07-23 01:09:29.003331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:29.003686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.003851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.003879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:29.003905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:29.004070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:29.004258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:29.004283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:29.004299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:29.006579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.849 [2024-07-23 01:09:29.015937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:29.016289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.016490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.016516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:29.016532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:29.016737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:29.016908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:29.016933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:29.016949] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:29.019162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.849 [2024-07-23 01:09:29.028637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:29.029064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.029255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.029283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:29.029301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:29.029431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:29.029601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:29.029646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:29.029664] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:29.031934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.849 [2024-07-23 01:09:29.041256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.849 [2024-07-23 01:09:29.041701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.041869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.849 [2024-07-23 01:09:29.041910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:44.849 [2024-07-23 01:09:29.041929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:44.849 [2024-07-23 01:09:29.042094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:44.849 [2024-07-23 01:09:29.042249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.849 [2024-07-23 01:09:29.042274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.849 [2024-07-23 01:09:29.042291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.849 [2024-07-23 01:09:29.044543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.109 [2024-07-23 01:09:29.053878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.109 [2024-07-23 01:09:29.054241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.109 [2024-07-23 01:09:29.054450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.109 [2024-07-23 01:09:29.054479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.109 [2024-07-23 01:09:29.054503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.109 [2024-07-23 01:09:29.054667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.109 [2024-07-23 01:09:29.054813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.109 [2024-07-23 01:09:29.054838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.109 [2024-07-23 01:09:29.054854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.109 [2024-07-23 01:09:29.057120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.109 [2024-07-23 01:09:29.066466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.109 [2024-07-23 01:09:29.066809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.109 [2024-07-23 01:09:29.067171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.109 [2024-07-23 01:09:29.067223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.067240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.067424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.067559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.067583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.067599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.069819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.110 [2024-07-23 01:09:29.079101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.079507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.079677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.079707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.079725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.079873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.080043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.080068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.080084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.082399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.110 [2024-07-23 01:09:29.091719] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.092053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.092347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.092393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.092411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.092528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.092710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.092735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.092752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.095040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.110 [2024-07-23 01:09:29.104335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.104657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.104866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.104895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.104912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.105041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.105212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.105237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.105253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.107678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.110 [2024-07-23 01:09:29.116916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.117284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.117463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.117491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.117509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.117724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.117877] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.117901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.117917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.120409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.110 [2024-07-23 01:09:29.129578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.129993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.130170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.130196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.130212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.130389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.130583] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.130608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.130635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.132877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.110 [2024-07-23 01:09:29.142177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.142561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.142767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.142793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.142809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.142988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.143105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.143129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.143145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.145478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.110 [2024-07-23 01:09:29.154656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.155055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.155219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.155248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.155266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.155468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.155666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.155692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.155708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.157985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.110 [2024-07-23 01:09:29.167160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.167588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.167830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.167859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.167877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.168026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.168141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.110 [2024-07-23 01:09:29.168171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.110 [2024-07-23 01:09:29.168188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.110 [2024-07-23 01:09:29.170537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.110 [2024-07-23 01:09:29.179768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.110 [2024-07-23 01:09:29.180144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.180333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.110 [2024-07-23 01:09:29.180397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.110 [2024-07-23 01:09:29.180415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.110 [2024-07-23 01:09:29.180580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.110 [2024-07-23 01:09:29.180687] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.180712] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.180728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.183111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.111 [2024-07-23 01:09:29.192569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.192883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.193165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.193216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.193235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.193401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.193606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.193640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.193656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.195914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.111 [2024-07-23 01:09:29.205047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.205391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.205604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.205643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.205662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.205865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.206053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.206078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.206100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.208451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.111 [2024-07-23 01:09:29.217544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.217957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.218171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.218204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.218238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.218350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.218556] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.218580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.218596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.220965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.111 [2024-07-23 01:09:29.230203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.230594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.230789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.230819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.230836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.230948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.231122] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.231146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.231162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.233477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.111 [2024-07-23 01:09:29.242921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.243298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.243508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.243537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.243555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.243732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.243921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.243946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.243962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.246478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.111 [2024-07-23 01:09:29.255478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.255854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.256078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.256105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.256121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.256303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.256491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.256515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.256532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.258926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.111 [2024-07-23 01:09:29.268145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.268691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.268855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.268884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.268902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.269087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.269257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.269281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.269298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.271745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.111 [2024-07-23 01:09:29.280838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.281219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.281429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.281458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.281477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.281654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.281878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.281903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.281920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.284412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.111 [2024-07-23 01:09:29.293363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.111 [2024-07-23 01:09:29.293707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.293921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.111 [2024-07-23 01:09:29.293950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.111 [2024-07-23 01:09:29.293968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.111 [2024-07-23 01:09:29.294152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.111 [2024-07-23 01:09:29.294340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.111 [2024-07-23 01:09:29.294365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.111 [2024-07-23 01:09:29.294382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.111 [2024-07-23 01:09:29.296737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.111 [2024-07-23 01:09:29.305900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.112 [2024-07-23 01:09:29.306223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.112 [2024-07-23 01:09:29.306409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.112 [2024-07-23 01:09:29.306438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.112 [2024-07-23 01:09:29.306456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.112 [2024-07-23 01:09:29.306651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.112 [2024-07-23 01:09:29.306840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.112 [2024-07-23 01:09:29.306865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.112 [2024-07-23 01:09:29.306881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.112 [2024-07-23 01:09:29.309373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.371 [2024-07-23 01:09:29.318418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.371 [2024-07-23 01:09:29.318823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.371 [2024-07-23 01:09:29.318972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.371 [2024-07-23 01:09:29.318998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.371 [2024-07-23 01:09:29.319014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.371 [2024-07-23 01:09:29.319127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.371 [2024-07-23 01:09:29.319266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.371 [2024-07-23 01:09:29.319291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.371 [2024-07-23 01:09:29.319307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.371 [2024-07-23 01:09:29.321667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.371 [2024-07-23 01:09:29.330865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.371 [2024-07-23 01:09:29.331260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.331443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.331472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.331490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.331651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.331838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.331863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.331880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.334031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.372 [2024-07-23 01:09:29.343376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.343751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.343980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.344006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.344038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.344224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.344394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.344419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.344435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.346722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.372 [2024-07-23 01:09:29.356023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.356412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.356640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.356671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.356690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.356874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.357026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.357051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.357067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.359290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.372 [2024-07-23 01:09:29.368438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.368856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.369080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.369133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.369153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.369283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.369472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.369497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.369513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.371854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.372 [2024-07-23 01:09:29.381209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.381544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.381738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.381769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.381787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.381971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.382195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.382220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.382236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.384667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.372 [2024-07-23 01:09:29.393723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.394109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.394344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.394392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.394411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.394558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.394739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.394765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.394781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.397328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.372 [2024-07-23 01:09:29.406372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.406792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.407010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.407036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.407072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.407257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.407446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.407471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.407487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.409846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.372 [2024-07-23 01:09:29.418970] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.419299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.419508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.419537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.419555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.419731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.419884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.419909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.419925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.422308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.372 [2024-07-23 01:09:29.431440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.431795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.432095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.432145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.432164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.372 [2024-07-23 01:09:29.432384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.372 [2024-07-23 01:09:29.432536] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.372 [2024-07-23 01:09:29.432561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.372 [2024-07-23 01:09:29.432577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.372 [2024-07-23 01:09:29.434880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.372 [2024-07-23 01:09:29.443775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.372 [2024-07-23 01:09:29.444144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.444452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.372 [2024-07-23 01:09:29.444511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.372 [2024-07-23 01:09:29.444529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.444711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.444864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.444888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.444905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.447344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.373 [2024-07-23 01:09:29.456537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.456896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.457114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.457160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.457179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.457344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.457495] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.457520] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.457536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.459984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.373 [2024-07-23 01:09:29.468837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.469165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.469489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.469539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.469557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.469758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.469965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.469990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.470006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.472301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.373 [2024-07-23 01:09:29.481437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.481812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.482042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.482067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.482083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.482236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.482429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.482454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.482471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.484839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.373 [2024-07-23 01:09:29.494074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.494443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.494636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.494665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.494683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.494812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.495000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.495024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.495041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.497372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.373 [2024-07-23 01:09:29.506716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.507190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.507444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.507469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.507484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.507660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.507849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.507874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.507890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.510312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.373 [2024-07-23 01:09:29.519371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.519704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.519858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.519887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.519905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.520107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.520295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.520324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.520342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.522350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.373 [2024-07-23 01:09:29.531927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.532304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.532459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.532488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.532508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.532667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.532838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.532872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.532889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.534892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.373 [2024-07-23 01:09:29.544394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.544762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.544936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.545002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.545021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.545170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.545394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.545418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.545435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.373 [2024-07-23 01:09:29.547721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.373 [2024-07-23 01:09:29.556994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.373 [2024-07-23 01:09:29.557349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.557492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.373 [2024-07-23 01:09:29.557533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.373 [2024-07-23 01:09:29.557548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.373 [2024-07-23 01:09:29.557755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.373 [2024-07-23 01:09:29.557944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.373 [2024-07-23 01:09:29.557968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.373 [2024-07-23 01:09:29.557990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.374 [2024-07-23 01:09:29.560287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.374 [2024-07-23 01:09:29.569405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.374 [2024-07-23 01:09:29.569762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.374 [2024-07-23 01:09:29.569942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.374 [2024-07-23 01:09:29.569972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.374 [2024-07-23 01:09:29.569990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.374 [2024-07-23 01:09:29.570138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.374 [2024-07-23 01:09:29.570271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.374 [2024-07-23 01:09:29.570296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.374 [2024-07-23 01:09:29.570312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.374 [2024-07-23 01:09:29.572554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.633 [2024-07-23 01:09:29.582006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.633 [2024-07-23 01:09:29.582357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.633 [2024-07-23 01:09:29.582564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.633 [2024-07-23 01:09:29.582592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.633 [2024-07-23 01:09:29.582610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.633 [2024-07-23 01:09:29.582769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.633 [2024-07-23 01:09:29.582921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.633 [2024-07-23 01:09:29.582946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.633 [2024-07-23 01:09:29.582962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.633 [2024-07-23 01:09:29.585238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.633 [2024-07-23 01:09:29.594712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.633 [2024-07-23 01:09:29.595064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.633 [2024-07-23 01:09:29.595258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.633 [2024-07-23 01:09:29.595287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.633 [2024-07-23 01:09:29.595305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.633 [2024-07-23 01:09:29.595524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.633 [2024-07-23 01:09:29.595722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.633 [2024-07-23 01:09:29.595748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.633 [2024-07-23 01:09:29.595765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.633 [2024-07-23 01:09:29.598045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.633 [2024-07-23 01:09:29.607342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.633 [2024-07-23 01:09:29.607680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.633 [2024-07-23 01:09:29.607872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.633 [2024-07-23 01:09:29.607902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.633 [2024-07-23 01:09:29.607920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.633 [2024-07-23 01:09:29.608050] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.633 [2024-07-23 01:09:29.608257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.633 [2024-07-23 01:09:29.608282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.633 [2024-07-23 01:09:29.608298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.633 [2024-07-23 01:09:29.610400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.633 [2024-07-23 01:09:29.619836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.633 [2024-07-23 01:09:29.620219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.620436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.620465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.620483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.620623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.620776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.620805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.620822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.623007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.634 [2024-07-23 01:09:29.632647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.633039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.633369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.633421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.633439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.633569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.633767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.633793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.633809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.636052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.634 [2024-07-23 01:09:29.645537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.645938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.646211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.646261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.646279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.646462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.646643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.646669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.646686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.648941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.634 [2024-07-23 01:09:29.658004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.658312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.658606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.658644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.658663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.658847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.658999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.659024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.659040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.661276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.634 [2024-07-23 01:09:29.670754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.671294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.671592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.671626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.671659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.671817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.672050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.672076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.672094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.674334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.634 [2024-07-23 01:09:29.683209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.683619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.683805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.683836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.683854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.684003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.684136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.684162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.684179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.686331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.634 [2024-07-23 01:09:29.695803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.696152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.696328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.696356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.696372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.696531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.696715] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.696740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.696757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.698980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.634 [2024-07-23 01:09:29.708761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.709049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.709417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.709470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.709488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.709686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.709838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.709864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.709881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.712233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.634 [2024-07-23 01:09:29.721369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.721747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.721964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.721994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.722012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.634 [2024-07-23 01:09:29.722161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.634 [2024-07-23 01:09:29.722330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.634 [2024-07-23 01:09:29.722356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.634 [2024-07-23 01:09:29.722373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.634 [2024-07-23 01:09:29.724809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.634 [2024-07-23 01:09:29.733934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.634 [2024-07-23 01:09:29.734320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.734505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.634 [2024-07-23 01:09:29.734536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.634 [2024-07-23 01:09:29.734554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.734713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.734866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.734890] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.734906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.737402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.635 [2024-07-23 01:09:29.746386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.746780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.746989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.747017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.747036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.747237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.747408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.747435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.747452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.749636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.635 [2024-07-23 01:09:29.758977] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.759356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.759567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.759596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.759631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.759782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.759970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.759996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.760013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.762202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.635 [2024-07-23 01:09:29.771436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.771823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.772040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.772065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.772082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.772283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.772464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.772490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.772507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.775066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.635 [2024-07-23 01:09:29.783607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.783994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.784192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.784223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.784241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.784408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.784577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.784603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.784633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.786916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.635 [2024-07-23 01:09:29.796201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.796664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.796881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.796911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.796930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.797138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.797308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.797334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.797350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.799783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.635 [2024-07-23 01:09:29.808715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.809188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.809374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.809402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.809419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.809568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.809766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.809792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.809809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.812124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.635 [2024-07-23 01:09:29.821413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.635 [2024-07-23 01:09:29.821784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.821980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.635 [2024-07-23 01:09:29.822019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.635 [2024-07-23 01:09:29.822035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.635 [2024-07-23 01:09:29.822197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.635 [2024-07-23 01:09:29.822330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.635 [2024-07-23 01:09:29.822356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.635 [2024-07-23 01:09:29.822373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.635 [2024-07-23 01:09:29.824696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.635 [2024-07-23 01:09:29.834045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.895 [2024-07-23 01:09:29.834443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.834627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.834654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.895 [2024-07-23 01:09:29.834686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.895 [2024-07-23 01:09:29.834857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.895 [2024-07-23 01:09:29.835027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.895 [2024-07-23 01:09:29.835053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.895 [2024-07-23 01:09:29.835069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.895 [2024-07-23 01:09:29.837295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.895 [2024-07-23 01:09:29.846549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.895 [2024-07-23 01:09:29.846954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.847311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.847363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.895 [2024-07-23 01:09:29.847381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.895 [2024-07-23 01:09:29.847547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.895 [2024-07-23 01:09:29.847731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.895 [2024-07-23 01:09:29.847756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.895 [2024-07-23 01:09:29.847772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.895 [2024-07-23 01:09:29.849961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.895 [2024-07-23 01:09:29.858790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.895 [2024-07-23 01:09:29.859170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.859378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.859426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.895 [2024-07-23 01:09:29.859445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.895 [2024-07-23 01:09:29.859661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.895 [2024-07-23 01:09:29.859849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.895 [2024-07-23 01:09:29.859875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.895 [2024-07-23 01:09:29.859892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.895 [2024-07-23 01:09:29.862224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.895 [2024-07-23 01:09:29.871253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.895 [2024-07-23 01:09:29.871623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.871824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.871852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.895 [2024-07-23 01:09:29.871869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.895 [2024-07-23 01:09:29.872032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.895 [2024-07-23 01:09:29.872217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.895 [2024-07-23 01:09:29.872243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.895 [2024-07-23 01:09:29.872259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.895 [2024-07-23 01:09:29.874591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.895 [2024-07-23 01:09:29.883894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.895 [2024-07-23 01:09:29.884292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.884468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.895 [2024-07-23 01:09:29.884509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.895 [2024-07-23 01:09:29.884526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.895 [2024-07-23 01:09:29.884697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.895 [2024-07-23 01:09:29.884851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.884876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.884893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.887187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.896 [2024-07-23 01:09:29.896599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.896956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.897199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.897241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.897258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.897466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.897626] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.897653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.897680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.899922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.896 [2024-07-23 01:09:29.909062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.909426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.909647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.909686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.909703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.909835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.909989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.910031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.910051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.912593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.896 [2024-07-23 01:09:29.921490] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.921849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.922061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.922090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.922108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.922238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.922408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.922433] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.922449] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.924623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.896 [2024-07-23 01:09:29.934018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.934445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.934692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.934734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.934750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.934922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.935109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.935134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.935151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.937357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.896 [2024-07-23 01:09:29.946545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.947056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.947270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.947317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.947336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.947519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.947709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.947730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.947748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.950129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.896 [2024-07-23 01:09:29.959269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.959720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.959936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.959966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.959984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.960168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.960338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.960364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.960381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.962537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.896 [2024-07-23 01:09:29.972055] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.972403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.972576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.972602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.972628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.972777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.972980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.973006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.973022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.975365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.896 [2024-07-23 01:09:29.984778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.985226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.985404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.985445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.985461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.985667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.985856] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.985881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.985897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:29.988108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.896 [2024-07-23 01:09:29.997481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.896 [2024-07-23 01:09:29.997866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.998135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.896 [2024-07-23 01:09:29.998165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.896 [2024-07-23 01:09:29.998183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.896 [2024-07-23 01:09:29.998312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.896 [2024-07-23 01:09:29.998482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.896 [2024-07-23 01:09:29.998507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.896 [2024-07-23 01:09:29.998523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.896 [2024-07-23 01:09:30.000922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3525480 Killed "${NVMF_APP[@]}" "$@" 00:29:45.897 01:09:30 -- host/bdevperf.sh@36 -- # tgt_init 00:29:45.897 01:09:30 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:45.897 01:09:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:45.897 01:09:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:45.897 01:09:30 -- common/autotest_common.sh@10 -- # set +x 00:29:45.897 [2024-07-23 01:09:30.010318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.010655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.010851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 01:09:30 -- nvmf/common.sh@469 -- # nvmfpid=3526585 00:29:45.897 [2024-07-23 01:09:30.010890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 01:09:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:45.897 [2024-07-23 01:09:30.010909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 01:09:30 -- nvmf/common.sh@470 -- # waitforlisten 3526585 00:29:45.897 [2024-07-23 01:09:30.011079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.011215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.011237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.011269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:45.897 01:09:30 -- common/autotest_common.sh@819 -- # '[' -z 3526585 ']' 00:29:45.897 01:09:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.897 01:09:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:45.897 01:09:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.897 01:09:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:45.897 01:09:30 -- common/autotest_common.sh@10 -- # set +x 00:29:45.897 [2024-07-23 01:09:30.013425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.897 [2024-07-23 01:09:30.023050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.023391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.023600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.023642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 [2024-07-23 01:09:30.023677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 [2024-07-23 01:09:30.023828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.024038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.024063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.024080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.897 [2024-07-23 01:09:30.026459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.897 [2024-07-23 01:09:30.035675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.036002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.036198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.036227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 [2024-07-23 01:09:30.036246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 [2024-07-23 01:09:30.036430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.036564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.036589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.036605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
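waitforlisten blocks until the new nvmf_tgt (pid 3526585) is answering JSON-RPC on /var/tmp/spdk.sock, which is why the "Waiting for process to start up..." message appears while the reconnect errors continue. A minimal stand-in for that wait (the polling method is an assumption; the socket path is the one named in the log):

    # Poll the RPC socket until the target answers (spdk_get_version is a cheap query):
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done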
00:29:45.897 [2024-07-23 01:09:30.038950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.897 [2024-07-23 01:09:30.048084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.048437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.048690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.048732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 [2024-07-23 01:09:30.048749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 [2024-07-23 01:09:30.048927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.049093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.049118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.049135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.897 [2024-07-23 01:09:30.051490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.897 [2024-07-23 01:09:30.056362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:45.897 [2024-07-23 01:09:30.056432] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.897 [2024-07-23 01:09:30.060852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.061201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.061386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.061416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 [2024-07-23 01:09:30.061434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 [2024-07-23 01:09:30.061600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.061784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.061806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.061820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.897 [2024-07-23 01:09:30.064017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.897 [2024-07-23 01:09:30.073468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.074028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.074223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.074255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 [2024-07-23 01:09:30.074274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 [2024-07-23 01:09:30.074446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.074599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.074639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.074657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.897 [2024-07-23 01:09:30.076923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.897 [2024-07-23 01:09:30.085808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.897 [2024-07-23 01:09:30.086211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.086378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.897 [2024-07-23 01:09:30.086409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:45.897 [2024-07-23 01:09:30.086428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:45.897 [2024-07-23 01:09:30.086595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:45.897 [2024-07-23 01:09:30.086813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.897 [2024-07-23 01:09:30.086850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.897 [2024-07-23 01:09:30.086865] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.897 [2024-07-23 01:09:30.089143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.159 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.159 [2024-07-23 01:09:30.098440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.159 [2024-07-23 01:09:30.098792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.098948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.098992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.159 [2024-07-23 01:09:30.099011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.159 [2024-07-23 01:09:30.099177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.159 [2024-07-23 01:09:30.099348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.159 [2024-07-23 01:09:30.099373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.159 [2024-07-23 01:09:30.099389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.159 [2024-07-23 01:09:30.101475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.159 [2024-07-23 01:09:30.111120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.159 [2024-07-23 01:09:30.111552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.111736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.111764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.159 [2024-07-23 01:09:30.111781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.159 [2024-07-23 01:09:30.111942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.159 [2024-07-23 01:09:30.112125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.159 [2024-07-23 01:09:30.112150] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.159 [2024-07-23 01:09:30.112167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.159 [2024-07-23 01:09:30.114424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
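The "No free 2048 kB hugepages reported on node 1" notice from the EAL is informational here; the target goes on to start its reactors below, so hugepages were available on another NUMA node. If it ever turned into a real allocation failure, the per-node counts can be checked and topped up before launching nvmf_tgt (the sysfs paths are standard Linux; the HUGEMEM usage follows SPDK's setup.sh convention):

    # Per-NUMA-node 2 MB hugepage counts:
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    # Reserve roughly 4 GB of hugepages via SPDK's helper before starting the target:
    sudo HUGEMEM=4096 ./scripts/setup.sh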
00:29:46.159 [2024-07-23 01:09:30.123918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.159 [2024-07-23 01:09:30.124281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.124466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.124495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.159 [2024-07-23 01:09:30.124514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.159 [2024-07-23 01:09:30.124843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.159 [2024-07-23 01:09:30.124971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.159 [2024-07-23 01:09:30.125009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.159 [2024-07-23 01:09:30.125025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.159 [2024-07-23 01:09:30.127383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.159 [2024-07-23 01:09:30.132129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.159 [2024-07-23 01:09:30.136513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.159 [2024-07-23 01:09:30.137025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.137254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.159 [2024-07-23 01:09:30.137284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.159 [2024-07-23 01:09:30.137303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.159 [2024-07-23 01:09:30.137491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.137636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.137676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.137692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.139881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.160 [2024-07-23 01:09:30.149035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.149484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.149738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.149765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.149785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.149957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.150125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.150151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.150169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.152534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.160 [2024-07-23 01:09:30.161717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.162077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.162313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.162342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.162361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.162527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.162717] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.162741] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.162756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.165043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.160 [2024-07-23 01:09:30.174216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.174585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.174841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.174871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.174890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.175074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.175208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.175233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.175250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.177477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.160 [2024-07-23 01:09:30.186722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.187174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.187417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.187443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.187463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.187662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.187839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.187860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.187876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.190340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.160 [2024-07-23 01:09:30.199393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.199780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.199957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.199987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.200006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.200158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.200293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.200317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.200334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.202791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.160 [2024-07-23 01:09:30.212088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.212506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.212705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.212732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.212761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.212926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.213093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.213118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.213134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.215610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.160 [2024-07-23 01:09:30.222331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:46.160 [2024-07-23 01:09:30.222456] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.160 [2024-07-23 01:09:30.222474] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.160 [2024-07-23 01:09:30.222487] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
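The trace_flags error about RDMA_REQ_RDY_TO_COMPL_PEND being too long appears non-fatal (the app continues and immediately prints its trace-capture hints); the useful part is the note that "-e 0xFFFF" enabled all tracepoint groups and how to collect them. Following the app's own hint (the output filename below is an assumption):

    # Snapshot the live tracepoints of instance 0, and keep the raw shm file for offline analysis:
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    cp /dev/shm/nvmf_trace.0 /tmp/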
00:29:46.160 [2024-07-23 01:09:30.222541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.160 [2024-07-23 01:09:30.222566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.160 [2024-07-23 01:09:30.222568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.160 [2024-07-23 01:09:30.224258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.160 [2024-07-23 01:09:30.224688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.224872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.160 [2024-07-23 01:09:30.224899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.160 [2024-07-23 01:09:30.224916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.160 [2024-07-23 01:09:30.225098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.160 [2024-07-23 01:09:30.225244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.160 [2024-07-23 01:09:30.225266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.160 [2024-07-23 01:09:30.225281] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.160 [2024-07-23 01:09:30.227210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.161 [2024-07-23 01:09:30.236353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.236884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.237047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.237074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.237094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.237269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.237449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.237471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.237498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.239470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.161 [2024-07-23 01:09:30.248577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.249264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.249556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.249585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.249606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.249820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.250010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.250033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.250051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.252117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.161 [2024-07-23 01:09:30.261046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.261585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.261756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.261783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.261804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.262013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.262177] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.262199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.262216] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.264272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.161 [2024-07-23 01:09:30.273198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.273754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.273979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.274007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.274027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.274265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.274386] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.274409] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.274426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.276385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.161 [2024-07-23 01:09:30.285410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.285845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.286057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.286085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.286104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.286259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.286438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.286461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.286477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.288508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.161 [2024-07-23 01:09:30.298171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.298729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.298916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.298942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.298962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.299124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.299278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.299302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.299319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.301410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.161 [2024-07-23 01:09:30.310664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.311003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.311182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.311210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.311227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.311377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.311539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.311562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.311577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.313530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.161 [2024-07-23 01:09:30.323086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.161 [2024-07-23 01:09:30.323422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.323570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.161 [2024-07-23 01:09:30.323598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.161 [2024-07-23 01:09:30.323621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.161 [2024-07-23 01:09:30.323789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.161 [2024-07-23 01:09:30.323940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.161 [2024-07-23 01:09:30.323963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.161 [2024-07-23 01:09:30.323977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.161 [2024-07-23 01:09:30.326041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.161 [2024-07-23 01:09:30.335147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.162 [2024-07-23 01:09:30.335436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.162 [2024-07-23 01:09:30.335626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.162 [2024-07-23 01:09:30.335655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.162 [2024-07-23 01:09:30.335671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.162 [2024-07-23 01:09:30.335807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.162 [2024-07-23 01:09:30.336021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.162 [2024-07-23 01:09:30.336044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.162 [2024-07-23 01:09:30.336058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.162 [2024-07-23 01:09:30.338049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.162 [2024-07-23 01:09:30.347530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.162 [2024-07-23 01:09:30.347891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.162 [2024-07-23 01:09:30.348096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.162 [2024-07-23 01:09:30.348124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.162 [2024-07-23 01:09:30.348140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.162 [2024-07-23 01:09:30.348290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.162 [2024-07-23 01:09:30.348466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.162 [2024-07-23 01:09:30.348489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.162 [2024-07-23 01:09:30.348503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.162 [2024-07-23 01:09:30.350411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.450 [2024-07-23 01:09:30.359819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.360127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.360324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.360352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.360369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.360536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.360698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.360722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.360738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.362880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.450 [2024-07-23 01:09:30.371921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.372264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.372423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.372450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.372466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.372624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.372790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.372812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.372827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.374848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.450 [2024-07-23 01:09:30.384259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.384566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.384772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.384800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.384817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.384998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.385174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.385197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.385211] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.387200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.450 [2024-07-23 01:09:30.396620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.396977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.397144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.397175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.397192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.397355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.397532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.397555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.397569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.399755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.450 [2024-07-23 01:09:30.408755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.409113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.409258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.409288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.409319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.409480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.409594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.409638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.409656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.411723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.450 [2024-07-23 01:09:30.421064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.421373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.421521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.421548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.421565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.421741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.421936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.421958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.421973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.424065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.450 [2024-07-23 01:09:30.433383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.433724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.433898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.433923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.450 [2024-07-23 01:09:30.433944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.450 [2024-07-23 01:09:30.434125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.450 [2024-07-23 01:09:30.434271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.450 [2024-07-23 01:09:30.434294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.450 [2024-07-23 01:09:30.434308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.450 [2024-07-23 01:09:30.436358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.450 [2024-07-23 01:09:30.445653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.450 [2024-07-23 01:09:30.445957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.446091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.450 [2024-07-23 01:09:30.446117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.446133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.446250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.446443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.446466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.446480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.448410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.451 [2024-07-23 01:09:30.457974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.458229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.458409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.458438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.458454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.458675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.458807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.458837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.458852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.460905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.451 [2024-07-23 01:09:30.470286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.470579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.470757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.470786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.470803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.471005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.471135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.471158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.471172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.473071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.451 [2024-07-23 01:09:30.482469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.482807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.483004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.483030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.483047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.483163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.483342] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.483374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.483388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.485389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.451 [2024-07-23 01:09:30.494764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.495066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.495237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.495264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.495280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.495494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.495633] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.495657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.495672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.497675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.451 [2024-07-23 01:09:30.506989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.507356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.507522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.507549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.507565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.507724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.507854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.507877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.507907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.510098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.451 [2024-07-23 01:09:30.519099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.519426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.519596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.519631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.519649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.519783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.519967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.519989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.520004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.521976] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.451 [2024-07-23 01:09:30.531277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.531689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.531835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.531863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.531880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.532013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.532191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.532213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.532226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.534289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.451 [2024-07-23 01:09:30.543456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.543791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.543960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.543987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.544004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.544203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.544347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.451 [2024-07-23 01:09:30.544373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.451 [2024-07-23 01:09:30.544388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.451 [2024-07-23 01:09:30.546660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.451 [2024-07-23 01:09:30.555800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.451 [2024-07-23 01:09:30.556189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.556351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.451 [2024-07-23 01:09:30.556379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.451 [2024-07-23 01:09:30.556396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.451 [2024-07-23 01:09:30.556546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.451 [2024-07-23 01:09:30.556739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.556764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.556779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.558896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.452 [2024-07-23 01:09:30.568365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.452 [2024-07-23 01:09:30.568671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.568837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.568865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.452 [2024-07-23 01:09:30.568881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.452 [2024-07-23 01:09:30.569014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.452 [2024-07-23 01:09:30.569190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.569213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.569227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.571236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.452 [2024-07-23 01:09:30.580705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.452 [2024-07-23 01:09:30.581038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.581243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.581269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.452 [2024-07-23 01:09:30.581286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.452 [2024-07-23 01:09:30.581451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.452 [2024-07-23 01:09:30.581669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.581693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.581714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.583621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.452 [2024-07-23 01:09:30.592805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.452 [2024-07-23 01:09:30.593112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.593291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.593318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.452 [2024-07-23 01:09:30.593335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.452 [2024-07-23 01:09:30.593485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.452 [2024-07-23 01:09:30.593639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.593661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.593676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.595912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.452 [2024-07-23 01:09:30.604974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.452 [2024-07-23 01:09:30.605342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.605507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.605534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.452 [2024-07-23 01:09:30.605551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.452 [2024-07-23 01:09:30.605760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.452 [2024-07-23 01:09:30.605893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.605930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.605945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.607997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.452 [2024-07-23 01:09:30.617185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.452 [2024-07-23 01:09:30.617604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.617788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.617816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.452 [2024-07-23 01:09:30.617833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.452 [2024-07-23 01:09:30.618031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.452 [2024-07-23 01:09:30.618205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.618228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.618242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.620250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.452 [2024-07-23 01:09:30.629655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.452 [2024-07-23 01:09:30.630019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.630183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.452 [2024-07-23 01:09:30.630210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.452 [2024-07-23 01:09:30.630227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.452 [2024-07-23 01:09:30.630408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.452 [2024-07-23 01:09:30.630562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.452 [2024-07-23 01:09:30.630585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.452 [2024-07-23 01:09:30.630600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.452 [2024-07-23 01:09:30.632789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.714 [2024-07-23 01:09:30.641924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.714 [2024-07-23 01:09:30.642252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.714 [2024-07-23 01:09:30.642419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.714 [2024-07-23 01:09:30.642447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.714 [2024-07-23 01:09:30.642464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.714 [2024-07-23 01:09:30.642638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.714 [2024-07-23 01:09:30.642836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.714 [2024-07-23 01:09:30.642858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.714 [2024-07-23 01:09:30.642876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.644911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.715 [2024-07-23 01:09:30.654263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.654585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.654749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.654775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.654792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.654959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.655152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.655175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.655189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.657255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.715 [2024-07-23 01:09:30.666440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.666806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.666975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.667003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.667022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.667187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.667333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.667355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.667368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.669555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.715 [2024-07-23 01:09:30.678535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.678860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.679030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.679056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.679073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.679254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.679415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.679445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.679458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.681639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.715 [2024-07-23 01:09:30.691009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.691380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.691545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.691571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.691588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.691728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.691879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.691901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.691931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.694089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.715 [2024-07-23 01:09:30.703302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.703694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.703860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.703887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.703903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.704068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.704215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.704237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.704251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.706285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.715 [2024-07-23 01:09:30.715705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.716029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.716163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.716190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.716206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.716371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.716548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.716570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.716584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.718463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.715 [2024-07-23 01:09:30.728027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.728377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.728543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.728569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.728585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.728710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.728876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.728899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.728927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.731037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.715 [2024-07-23 01:09:30.740415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.740739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.740905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.740935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.740953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.741134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.741295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.741316] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.741330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.743344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.715 [2024-07-23 01:09:30.752626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.752947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.753086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.753112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.715 [2024-07-23 01:09:30.753128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.715 [2024-07-23 01:09:30.753263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.715 [2024-07-23 01:09:30.753425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.715 [2024-07-23 01:09:30.753447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.715 [2024-07-23 01:09:30.753461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.715 [2024-07-23 01:09:30.755564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.715 [2024-07-23 01:09:30.764943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.715 [2024-07-23 01:09:30.765309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.715 [2024-07-23 01:09:30.765479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.765506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.765522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.765682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.765833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.765856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.765870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.767949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.716 [2024-07-23 01:09:30.777177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.777584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.777764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.777792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.777813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.777978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.778108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.778129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.778143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.780316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.716 [2024-07-23 01:09:30.789389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.789778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.789972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.789998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.790014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.790179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.790370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.790391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.790405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.792439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.716 [2024-07-23 01:09:30.801673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.802007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.802187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.802213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.802230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.802424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.802568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.802589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.802606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.804729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.716 [2024-07-23 01:09:30.814006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.814426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.814650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.814677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.814694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.814881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.815056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.815077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.815090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.817157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.716 [2024-07-23 01:09:30.826297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.826723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.826891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.826928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.826945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.827155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.827314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.827335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.827349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.829532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.716 [2024-07-23 01:09:30.838451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.838811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.838957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.838983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.839000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.839149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.839355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.839376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.839391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.841440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.716 [2024-07-23 01:09:30.850857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.851251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.851420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.851446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.851463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.851651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.851790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.851811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.851826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.853813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.716 [2024-07-23 01:09:30.863084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.863463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.863660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.863687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.863704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.863886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.864042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.716 [2024-07-23 01:09:30.864064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.716 [2024-07-23 01:09:30.864078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.716 [2024-07-23 01:09:30.866097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.716 [2024-07-23 01:09:30.875454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.716 [2024-07-23 01:09:30.875806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.875995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.716 [2024-07-23 01:09:30.876021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.716 [2024-07-23 01:09:30.876037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.716 [2024-07-23 01:09:30.876171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.716 [2024-07-23 01:09:30.876364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.717 [2024-07-23 01:09:30.876386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.717 [2024-07-23 01:09:30.876402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.717 [2024-07-23 01:09:30.878388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.717 [2024-07-23 01:09:30.887848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.717 [2024-07-23 01:09:30.888156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.717 [2024-07-23 01:09:30.888349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.717 [2024-07-23 01:09:30.888376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.717 [2024-07-23 01:09:30.888392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.717 [2024-07-23 01:09:30.888588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.717 [2024-07-23 01:09:30.888729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.717 [2024-07-23 01:09:30.888756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.717 [2024-07-23 01:09:30.888771] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.717 [2024-07-23 01:09:30.890772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.717 [2024-07-23 01:09:30.900210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.717 [2024-07-23 01:09:30.900548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.717 [2024-07-23 01:09:30.900724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.717 [2024-07-23 01:09:30.900753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.717 [2024-07-23 01:09:30.900770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.717 [2024-07-23 01:09:30.900934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.717 [2024-07-23 01:09:30.901091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.717 [2024-07-23 01:09:30.901113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.717 [2024-07-23 01:09:30.901127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.717 [2024-07-23 01:09:30.903103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.717 [2024-07-23 01:09:30.912559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.717 [2024-07-23 01:09:30.912888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.717 [2024-07-23 01:09:30.913074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.717 [2024-07-23 01:09:30.913100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.717 [2024-07-23 01:09:30.913117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.717 [2024-07-23 01:09:30.913250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.717 [2024-07-23 01:09:30.913414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.717 [2024-07-23 01:09:30.913437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.717 [2024-07-23 01:09:30.913452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.717 [2024-07-23 01:09:30.915539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.976 [2024-07-23 01:09:30.924828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.976 [2024-07-23 01:09:30.925212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.976 [2024-07-23 01:09:30.925390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.976 [2024-07-23 01:09:30.925416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.976 [2024-07-23 01:09:30.925432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.976 [2024-07-23 01:09:30.925639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.976 [2024-07-23 01:09:30.925810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.976 [2024-07-23 01:09:30.925833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.976 [2024-07-23 01:09:30.925853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.976 [2024-07-23 01:09:30.927758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.976 [2024-07-23 01:09:30.937151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.976 [2024-07-23 01:09:30.937472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.976 [2024-07-23 01:09:30.937666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.976 [2024-07-23 01:09:30.937694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.976 [2024-07-23 01:09:30.937711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.976 [2024-07-23 01:09:30.937876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.976 [2024-07-23 01:09:30.938037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.976 [2024-07-23 01:09:30.938058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.976 [2024-07-23 01:09:30.938072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.976 [2024-07-23 01:09:30.940143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.976 [2024-07-23 01:09:30.949470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.976 [2024-07-23 01:09:30.949777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.976 [2024-07-23 01:09:30.949918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.976 [2024-07-23 01:09:30.949944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.976 [2024-07-23 01:09:30.949960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.976 [2024-07-23 01:09:30.950157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.976 [2024-07-23 01:09:30.950348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.976 [2024-07-23 01:09:30.950370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:30.950384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 [2024-07-23 01:09:30.952459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.977 [2024-07-23 01:09:30.961830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:30.962155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.962301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.962328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:30.962345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:30.962478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:30.962683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:30.962706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:30.962721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 [2024-07-23 01:09:30.964633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.977 [2024-07-23 01:09:30.974173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:30.974471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.974670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.974697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:30.974714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:30.974879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:30.975013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:30.975035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:30.975050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 [2024-07-23 01:09:30.977193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.977 [2024-07-23 01:09:30.986630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:30.986960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 01:09:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:46.977 [2024-07-23 01:09:30.987130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.987156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:30.987173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 01:09:30 -- common/autotest_common.sh@852 -- # return 0 00:29:46.977 01:09:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:46.977 [2024-07-23 01:09:30.987354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 01:09:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:46.977 01:09:30 -- common/autotest_common.sh@10 -- # set +x 00:29:46.977 [2024-07-23 01:09:30.987560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:30.987582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:30.987610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 [2024-07-23 01:09:30.989786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.977 [2024-07-23 01:09:30.998916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:30.999279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.999442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:30.999469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:30.999486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:30.999628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:30.999782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:30.999805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:30.999825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 [2024-07-23 01:09:31.001966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.977 01:09:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.977 01:09:31 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.977 01:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.977 01:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:46.977 [2024-07-23 01:09:31.007107] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.977 [2024-07-23 01:09:31.011226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:31.011553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.011706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.011734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:31.011751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:31.011884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:31.012049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:31.012070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:31.012084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 01:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.977 01:09:31 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:46.977 01:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.977 01:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:46.977 [2024-07-23 01:09:31.014363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.977 [2024-07-23 01:09:31.023502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:31.023833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.023993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.024020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:31.024037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:31.024267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:31.024421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:31.024441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:31.024455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:46.977 [2024-07-23 01:09:31.026498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.977 [2024-07-23 01:09:31.036132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:31.036656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.036883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.036937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:31.036966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:31.037188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:31.037321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:31.037344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:31.037360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.977 [2024-07-23 01:09:31.039582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.977 Malloc0 00:29:46.977 01:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.977 01:09:31 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.977 01:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.977 01:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:46.977 [2024-07-23 01:09:31.048393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.977 [2024-07-23 01:09:31.048791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.048943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.977 [2024-07-23 01:09:31.048971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.977 [2024-07-23 01:09:31.049000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.977 [2024-07-23 01:09:31.049105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.977 [2024-07-23 01:09:31.049271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.977 [2024-07-23 01:09:31.049293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.977 [2024-07-23 01:09:31.049308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.978 [2024-07-23 01:09:31.051371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.978 01:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.978 01:09:31 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.978 01:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.978 01:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:46.978 [2024-07-23 01:09:31.060630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.978 [2024-07-23 01:09:31.060943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.978 [2024-07-23 01:09:31.061092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.978 [2024-07-23 01:09:31.061129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9fc20 with addr=10.0.0.2, port=4420 00:29:46.978 [2024-07-23 01:09:31.061146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fc20 is same with the state(5) to be set 00:29:46.978 [2024-07-23 01:09:31.061295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9fc20 (9): Bad file descriptor 00:29:46.978 [2024-07-23 01:09:31.061425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.978 [2024-07-23 01:09:31.061447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.978 [2024-07-23 01:09:31.061461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.978 01:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.978 01:09:31 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.978 01:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.978 01:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:46.978 [2024-07-23 01:09:31.063651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.978 [2024-07-23 01:09:31.066489] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.978 01:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.978 01:09:31 -- host/bdevperf.sh@38 -- # wait 3525783 00:29:46.978 [2024-07-23 01:09:31.073225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.978 [2024-07-23 01:09:31.141503] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
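The rpc_cmd calls interleaved with the reset loop above are the target-side bring-up: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the 10.0.0.2:4420 listener, at which point the long-failing reset completes with "Resetting controller successful." The same sequence can be replayed by hand with SPDK's rpc.py using the arguments recorded in the trace; a minimal sketch, assuming a running nvmf_tgt and the default RPC socket (the ./scripts/rpc.py path is illustrative, not taken from this log):

    # target-side configuration, mirroring the rpc_cmd calls in the trace above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport ("TCP Transport Init")
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # unblocks the reconnect loop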
00:29:56.946
00:29:56.946                                                                                   Latency(us)
00:29:56.946 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:56.946 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:56.946      Verification LBA range: start 0x0 length 0x4000
00:29:56.946      Nvme1n1             :      15.01    9621.30      37.58   15658.77       0.00    5048.65     934.49   20000.62
00:29:56.946 ===================================================================================================================
00:29:56.946 Total                    :               9621.30      37.58   15658.77       0.00    5048.65     934.49   20000.62
00:29:56.946 01:09:39 -- host/bdevperf.sh@39 -- # sync
00:29:56.946 01:09:39 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:56.946 01:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:56.946 01:09:39 -- common/autotest_common.sh@10 -- # set +x
00:29:56.946 01:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:56.946 01:09:39 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:56.946 01:09:39 -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:56.946 01:09:39 -- nvmf/common.sh@476 -- # nvmfcleanup
00:29:56.946 01:09:39 -- nvmf/common.sh@116 -- # sync
00:29:56.946 01:09:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:29:56.946 01:09:39 -- nvmf/common.sh@119 -- # set +e
00:29:56.946 01:09:39 -- nvmf/common.sh@120 -- # for i in {1..20}
00:29:56.946 01:09:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:29:56.946 rmmod nvme_tcp
00:29:56.946 rmmod nvme_fabrics
00:29:56.946 rmmod nvme_keyring
00:29:56.946 01:09:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:29:56.946 01:09:39 -- nvmf/common.sh@123 -- # set -e
00:29:56.946 01:09:39 -- nvmf/common.sh@124 -- # return 0
00:29:56.946 01:09:39 -- nvmf/common.sh@477 -- # '[' -n 3526585 ']'
00:29:56.946 01:09:39 -- nvmf/common.sh@478 -- # killprocess 3526585
00:29:56.946 01:09:39 -- common/autotest_common.sh@926 -- # '[' -z 3526585 ']'
00:29:56.946 01:09:39 -- common/autotest_common.sh@930 -- # kill -0 3526585
00:29:56.946 01:09:39 -- common/autotest_common.sh@931 -- # uname
00:29:56.946 01:09:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:56.946 01:09:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3526585
00:29:56.946 01:09:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:56.946 01:09:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:56.946 01:09:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3526585'
00:29:56.946 killing process with pid 3526585
00:29:56.946 01:09:39 -- common/autotest_common.sh@945 -- # kill 3526585
00:29:56.946 01:09:39 -- common/autotest_common.sh@950 -- # wait 3526585
00:29:56.946 01:09:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:29:56.946 01:09:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:29:56.946 01:09:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:29:56.946 01:09:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:56.946 01:09:40 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:29:56.946 01:09:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:56.946 01:09:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:56.946 01:09:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:57.882 01:09:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:29:57.882
00:29:57.882 real 0m22.896s
00:29:57.882 user 1m1.925s
00:29:57.882 sys 0m4.225s
00:29:57.882 01:09:42 --
common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.882 01:09:42 -- common/autotest_common.sh@10 -- # set +x 00:29:57.882 ************************************ 00:29:57.882 END TEST nvmf_bdevperf 00:29:57.882 ************************************ 00:29:58.141 01:09:42 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:58.141 01:09:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:58.141 01:09:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:58.141 01:09:42 -- common/autotest_common.sh@10 -- # set +x 00:29:58.141 ************************************ 00:29:58.141 START TEST nvmf_target_disconnect 00:29:58.141 ************************************ 00:29:58.141 01:09:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:58.141 * Looking for test storage... 00:29:58.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.141 01:09:42 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.141 01:09:42 -- nvmf/common.sh@7 -- # uname -s 00:29:58.141 01:09:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.141 01:09:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.141 01:09:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.141 01:09:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.141 01:09:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.141 01:09:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.141 01:09:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.141 01:09:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.141 01:09:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.141 01:09:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.141 01:09:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.141 01:09:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.141 01:09:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.141 01:09:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.141 01:09:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.141 01:09:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.141 01:09:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.141 01:09:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.141 01:09:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.141 01:09:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.141 01:09:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.141 01:09:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.141 01:09:42 -- paths/export.sh@5 -- # export PATH 00:29:58.141 01:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.141 01:09:42 -- nvmf/common.sh@46 -- # : 0 00:29:58.141 01:09:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:58.142 01:09:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:58.142 01:09:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:58.142 01:09:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.142 01:09:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.142 01:09:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:58.142 01:09:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:58.142 01:09:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:58.142 01:09:42 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:58.142 01:09:42 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:58.142 01:09:42 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:58.142 01:09:42 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:58.142 01:09:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:58.142 01:09:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.142 01:09:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:58.142 01:09:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:58.142 01:09:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:58.142 01:09:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.142 01:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.142 01:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.142 01:09:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:58.142 01:09:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:58.142 01:09:42 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:29:58.142 01:09:42 -- common/autotest_common.sh@10 -- # set +x 00:30:00.046 01:09:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:00.046 01:09:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:00.046 01:09:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:00.046 01:09:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:00.046 01:09:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:00.046 01:09:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:00.046 01:09:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:00.046 01:09:44 -- nvmf/common.sh@294 -- # net_devs=() 00:30:00.046 01:09:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:00.046 01:09:44 -- nvmf/common.sh@295 -- # e810=() 00:30:00.046 01:09:44 -- nvmf/common.sh@295 -- # local -ga e810 00:30:00.046 01:09:44 -- nvmf/common.sh@296 -- # x722=() 00:30:00.046 01:09:44 -- nvmf/common.sh@296 -- # local -ga x722 00:30:00.046 01:09:44 -- nvmf/common.sh@297 -- # mlx=() 00:30:00.046 01:09:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:00.046 01:09:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.046 01:09:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:00.046 01:09:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:00.046 01:09:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:00.046 01:09:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:00.046 01:09:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:00.046 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:00.046 01:09:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:00.046 01:09:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:00.047 01:09:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:00.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:00.047 01:09:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:00.047 01:09:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:00.047 01:09:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.047 01:09:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:00.047 01:09:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.047 01:09:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:00.047 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:00.047 01:09:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.047 01:09:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:00.047 01:09:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.047 01:09:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:00.047 01:09:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.047 01:09:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:00.047 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:00.047 01:09:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.047 01:09:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:00.047 01:09:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:00.047 01:09:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:00.047 01:09:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:00.047 01:09:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.047 01:09:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.047 01:09:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.047 01:09:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:00.047 01:09:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.047 01:09:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.047 01:09:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:00.047 01:09:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.047 01:09:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.047 01:09:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:00.047 01:09:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:00.047 01:09:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.047 01:09:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.047 01:09:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.047 01:09:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.047 01:09:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:00.047 01:09:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.306 01:09:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.306 01:09:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.306 01:09:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:00.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:00.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:30:00.306 00:30:00.306 --- 10.0.0.2 ping statistics --- 00:30:00.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.306 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:30:00.306 01:09:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:30:00.306 00:30:00.306 --- 10.0.0.1 ping statistics --- 00:30:00.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.306 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:30:00.306 01:09:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.306 01:09:44 -- nvmf/common.sh@410 -- # return 0 00:30:00.306 01:09:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:00.306 01:09:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.306 01:09:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:00.306 01:09:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:00.306 01:09:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.306 01:09:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:00.306 01:09:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:00.306 01:09:44 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:00.306 01:09:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:00.306 01:09:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:00.306 01:09:44 -- common/autotest_common.sh@10 -- # set +x 00:30:00.306 ************************************ 00:30:00.306 START TEST nvmf_target_disconnect_tc1 00:30:00.306 ************************************ 00:30:00.306 01:09:44 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:30:00.306 01:09:44 -- host/target_disconnect.sh@32 -- # set +e 00:30:00.306 01:09:44 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.306 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.306 [2024-07-23 01:09:44.415684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.306 [2024-07-23 01:09:44.416008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.306 [2024-07-23 01:09:44.416036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b5280 with addr=10.0.0.2, port=4420 00:30:00.306 [2024-07-23 01:09:44.416069] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:00.306 [2024-07-23 01:09:44.416088] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:00.306 [2024-07-23 01:09:44.416101] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:00.306 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:00.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:00.306 Initializing NVMe Controllers 00:30:00.306 01:09:44 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:00.306 01:09:44 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:00.306 01:09:44 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:30:00.306 01:09:44 -- common/autotest_common.sh@1132 -- # return 0 00:30:00.306 
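The entries above show nvmf_tcp_init building the loopback topology this test runs on: the two ports of the E810 (ice) adapter found in the PCI scan (named cvl_0_0 and cvl_0_1 in this run; the names will differ on other hardware) are split so that the target port lives in a private network namespace with 10.0.0.2/24 while the initiator port stays in the default namespace with 10.0.0.1/24, an iptables rule accepts NVMe/TCP traffic on port 4420, and a ping in each direction confirms the path before nvme-tcp is loaded. A condensed sketch of that setup, reusing the commands and addresses that appear in the log (variable plumbing and error handling from nvmf/common.sh omitted):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk      # interface/namespace names taken from this run
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                        # target side is isolated in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator side (NVMF_INITIATOR_IP)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side (NVMF_FIRST_TARGET_IP)
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # sanity-check both directions
  modprobe nvme-tcp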
01:09:44 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:00.306 01:09:44 -- host/target_disconnect.sh@41 -- # set -e 00:30:00.306 00:30:00.306 real 0m0.096s 00:30:00.306 user 0m0.041s 00:30:00.306 sys 0m0.054s 00:30:00.306 01:09:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.306 01:09:44 -- common/autotest_common.sh@10 -- # set +x 00:30:00.306 ************************************ 00:30:00.306 END TEST nvmf_target_disconnect_tc1 00:30:00.306 ************************************ 00:30:00.306 01:09:44 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:00.306 01:09:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:00.306 01:09:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:00.306 01:09:44 -- common/autotest_common.sh@10 -- # set +x 00:30:00.306 ************************************ 00:30:00.306 START TEST nvmf_target_disconnect_tc2 00:30:00.306 ************************************ 00:30:00.306 01:09:44 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:30:00.306 01:09:44 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:00.306 01:09:44 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:00.306 01:09:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:00.306 01:09:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:00.306 01:09:44 -- common/autotest_common.sh@10 -- # set +x 00:30:00.306 01:09:44 -- nvmf/common.sh@469 -- # nvmfpid=3529663 00:30:00.306 01:09:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:00.306 01:09:44 -- nvmf/common.sh@470 -- # waitforlisten 3529663 00:30:00.306 01:09:44 -- common/autotest_common.sh@819 -- # '[' -z 3529663 ']' 00:30:00.306 01:09:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.306 01:09:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:00.306 01:09:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.306 01:09:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:00.306 01:09:44 -- common/autotest_common.sh@10 -- # set +x 00:30:00.306 [2024-07-23 01:09:44.497337] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:00.306 [2024-07-23 01:09:44.497414] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.565 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.565 [2024-07-23 01:09:44.559431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.565 [2024-07-23 01:09:44.641999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:00.565 [2024-07-23 01:09:44.642153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.565 [2024-07-23 01:09:44.642171] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.565 [2024-07-23 01:09:44.642183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
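tc1 above passes precisely because the connection is refused: errno 111 is ECONNREFUSED, and at that point nothing is listening on 10.0.0.2:4420, so spdk_nvme_probe() failing is the expected outcome. tc2 then launches a real target inside the namespace (nvmf_tgt with core mask 0xF0) and waits for its RPC socket before configuring it. A simplified sketch of what nvmfappstart/waitforlisten amount to here; the rpc_get_methods poll stands in for the helper's actual readiness check, and the paths are the ones from this run:

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll the target's RPC socket until it answers before issuing any configuration calls
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done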
00:30:00.565 [2024-07-23 01:09:44.642272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:00.565 [2024-07-23 01:09:44.642336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:00.565 [2024-07-23 01:09:44.642399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:00.565 [2024-07-23 01:09:44.642402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:01.498 01:09:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:01.498 01:09:45 -- common/autotest_common.sh@852 -- # return 0 00:30:01.498 01:09:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:01.498 01:09:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.498 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.498 01:09:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.498 01:09:45 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.498 01:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.499 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.499 Malloc0 00:30:01.499 01:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.499 01:09:45 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:01.499 01:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.499 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.499 [2024-07-23 01:09:45.478389] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.499 01:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.499 01:09:45 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.499 01:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.499 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.499 01:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.499 01:09:45 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.499 01:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.499 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.499 01:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.499 01:09:45 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.499 01:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.499 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.499 [2024-07-23 01:09:45.506672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.499 01:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.499 01:09:45 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.499 01:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.499 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:30:01.499 01:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.499 01:09:45 -- host/target_disconnect.sh@50 -- # reconnectpid=3529823 00:30:01.499 01:09:45 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:01.499 01:09:45 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.499 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.406 01:09:47 -- host/target_disconnect.sh@53 -- # kill -9 3529663 00:30:03.406 01:09:47 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 [2024-07-23 01:09:47.531559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 4 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Write completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.406 starting I/O failed 00:30:03.406 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 [2024-07-23 01:09:47.531867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, 
sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 [2024-07-23 01:09:47.532153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 
00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Write completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 Read completed with error (sct=0, sc=8) 00:30:03.407 starting I/O failed 00:30:03.407 [2024-07-23 01:09:47.532506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.407 [2024-07-23 01:09:47.532753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.532926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.532953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.533115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.533384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.533413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.533641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.533795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.533820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.533988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.534133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.534158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 
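Once the target is up, tc2 configures it over RPC (a 64 MB Malloc0 bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.2:4420 for both the subsystem and discovery), starts the reconnect example against it, and then kills the target with kill -9 while I/O is in flight. The "Read/Write completed with error (sct=0, sc=8)" runs above are the 32 queued commands on each of the four qpairs being failed back by the host driver once the TCP connections drop (the "CQ transport error -6" entries), which is exactly the disconnect this test wants to exercise. The rpc_cmd calls map onto plain rpc.py invocations; the sketch below copies the arguments from the rpc_cmd lines in this log (rpc_cmd additionally passes the -s /var/tmp/spdk.sock socket argument, $SPDK as in the sketch above):

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420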
00:30:03.407 [2024-07-23 01:09:47.534303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.534521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.534550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.534752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.534890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.534916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.535126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.535365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.535393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.535684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.535827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.535852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.536052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.536188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.536228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.536511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.536705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.536731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.536870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.537073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.537098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 
00:30:03.407 [2024-07-23 01:09:47.537261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.537401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.537428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.537634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.537773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.407 [2024-07-23 01:09:47.537798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.407 qpair failed and we were unable to recover it. 00:30:03.407 [2024-07-23 01:09:47.537933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.538072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.538097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.538320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.538502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.538527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.538697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.538842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.538867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.539084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.539399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.539448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.539683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.539823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.539849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 
00:30:03.408 [2024-07-23 01:09:47.540082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.540368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.540407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.540581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.540727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.540752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.540893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.541028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.541052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.541268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.541487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.541511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.541708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.541846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.541872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.542097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.542262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.542302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.542498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.542696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.542723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 
00:30:03.408 [2024-07-23 01:09:47.542862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.543083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.543108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.543333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.543535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.543563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.543792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.543934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.543959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.544096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.544228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.544253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.544467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.544673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.544698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.544871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.545076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.545100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.545267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.545490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.545518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 
00:30:03.408 [2024-07-23 01:09:47.545722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.545891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.545930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.546115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.546337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.546361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.546547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.546705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.546731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.546872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.547044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.547067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.547254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.547452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.547477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.547672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.547848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.547873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.548134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.548287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.548314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 
00:30:03.408 [2024-07-23 01:09:47.548497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.548689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.548714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.548868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.549059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.549083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.549226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.549395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.549419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.549609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.549791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.549817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.408 [2024-07-23 01:09:47.550829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.550996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.408 [2024-07-23 01:09:47.551022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.408 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.551227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.551411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.551439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.551652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.551813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.551838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 
00:30:03.409 [2024-07-23 01:09:47.552058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.552229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.552254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.552445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.552609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.552640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.552783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.552926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.552952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.553167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.553354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.553378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.553598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.553816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.553840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.554133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.554323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.554347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.554480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.554678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.554704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 
00:30:03.409 [2024-07-23 01:09:47.554896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.555029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.555055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.555243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.555457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.555482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.555655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.555845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.555871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.556123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.556311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.556350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.556524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.556717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.556750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.556893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.557035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.557059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.557286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.557422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.557447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 
00:30:03.409 [2024-07-23 01:09:47.557639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.557816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.557841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.557999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.558185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.558209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.558399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.558589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.558645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.558854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.559000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.559025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.559213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.559440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.559468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.559638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.559853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.559878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.560142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.560341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.560367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 
00:30:03.409 [2024-07-23 01:09:47.560532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.560707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.560733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.560894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.561079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.561105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.561321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.561482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.561506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.561675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.561870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.561895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.562126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.562336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.562359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.562550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.562766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.562792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 00:30:03.409 [2024-07-23 01:09:47.563003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.563170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.563194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.409 qpair failed and we were unable to recover it. 
00:30:03.409 [2024-07-23 01:09:47.563358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.409 [2024-07-23 01:09:47.563583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.563607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.563833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.564016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.564040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.564212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.564473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.564500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.564692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.564921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.564946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.565136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.565331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.565355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.565573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.565728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.565753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.565943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.566124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.566148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 
00:30:03.410 [2024-07-23 01:09:47.566329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.566553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.566576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.566800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.566987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.567012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.567212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.567436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.567460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.567653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.567866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.567891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.568059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.568200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.568225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.568387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.568552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.568575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.568755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.568913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.568937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 
00:30:03.410 [2024-07-23 01:09:47.569082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.569273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.569315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.569506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.569727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.569754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.569915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.570088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.570112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.570278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.570438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.570476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.570691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.570876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.570904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.571126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.571339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.571363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.571550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.571701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.571729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 
00:30:03.410 [2024-07-23 01:09:47.571867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.572137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.572161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.572300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.572472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.410 [2024-07-23 01:09:47.572497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.410 qpair failed and we were unable to recover it. 00:30:03.410 [2024-07-23 01:09:47.572665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.572829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.572853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.573083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.573283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.573307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.573484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.573633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.573658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.573833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.573972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.573999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.574206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.574413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.574437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 
00:30:03.411 [2024-07-23 01:09:47.574652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.574952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.574992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.575186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.575380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.575405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.575592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.575771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.575797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.575968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.576133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.576158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.576338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.576577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.576603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.576788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.577159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.577182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.577380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.577588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.577625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 
00:30:03.411 [2024-07-23 01:09:47.577766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.577953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.577978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.578194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.578399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.578424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.578638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.578855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.578879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.579077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.579217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.579257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.579411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.579575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.579602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.579842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.580102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.580140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.580333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.580486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.580511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 
00:30:03.411 [2024-07-23 01:09:47.580707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.580878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.580903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.581119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.581298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.581323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.581476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.581664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.581697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.581865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.582116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.582155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.582345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.582558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.582585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.582786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.582978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.583016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.583255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.583413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.583440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 
00:30:03.411 [2024-07-23 01:09:47.583645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.583823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.583847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.583985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.584144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.584183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.584391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.584557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.584599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.584775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.584942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.584967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.411 qpair failed and we were unable to recover it. 00:30:03.411 [2024-07-23 01:09:47.585108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.411 [2024-07-23 01:09:47.585272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.585311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.585502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.585752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.585782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.585988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.586178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.586208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 
00:30:03.412 [2024-07-23 01:09:47.586416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.586618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.586643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.586796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.586968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.587006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.587167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.587365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.587390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.587543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.587736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.587761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.587959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.588123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.588148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.588298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.588509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.588533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.588707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.588865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.588891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 
00:30:03.412 [2024-07-23 01:09:47.589164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.589341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.589365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.589541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.589703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.589733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.589879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.590097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.590126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.590354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.590548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.590572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.590760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.591027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.591051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.591220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.591375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.591399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.591582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.591766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.591792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 
00:30:03.412 [2024-07-23 01:09:47.592024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.592238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.592263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.592448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.592660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.592689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.592832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.592998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.593040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.593255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.593463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.593487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.593653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.593808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.593832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.594082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.594251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.594275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.594477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.594712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.594738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 
00:30:03.412 [2024-07-23 01:09:47.594929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.595144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.595172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.595358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.595559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.595584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.595775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.595982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.596010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.596172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.596394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.596418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.596596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.596843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.596868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.597067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.597234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.597258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 00:30:03.412 [2024-07-23 01:09:47.597459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.597699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.412 [2024-07-23 01:09:47.597728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.412 qpair failed and we were unable to recover it. 
00:30:03.412 [2024-07-23 01:09:47.597930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.598104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.598145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.598368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.598568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.598592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.598790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.598949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.598975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.599106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.599317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.599341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.599526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.599710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.599748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.599970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.600148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.600174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.600352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.600543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.600568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 
00:30:03.413 [2024-07-23 01:09:47.600757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.600945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.600972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.601169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.601335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.601360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.601541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.601707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.601750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.601928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.602126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.602175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.602383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.602531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.602557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.602745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.602902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.602927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.603144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.603294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.603320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 
00:30:03.413 [2024-07-23 01:09:47.603513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.603707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.603732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.603904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.604055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.604093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-07-23 01:09:47.604352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.604497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-07-23 01:09:47.604523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.604674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.604840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.604876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.605097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.605322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.605358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.605641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.605842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.605868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.606042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.606217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.606265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 
00:30:03.685 [2024-07-23 01:09:47.606474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.606638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.606665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.606842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.607115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.607156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.607357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.607559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.607597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.607750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.607916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.607940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.608140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.608346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.608369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.608576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.608825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.608850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 00:30:03.685 [2024-07-23 01:09:47.609016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.609205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.685 [2024-07-23 01:09:47.609230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.685 qpair failed and we were unable to recover it. 
00:30:03.685 [2024-07-23 01:09:47.609391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.685 [2024-07-23 01:09:47.609602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.685 [2024-07-23 01:09:47.609635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420
00:30:03.685 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 01:09:47.609 through 01:09:47.672 ...]
00:30:03.692 [2024-07-23 01:09:47.672018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.692 [2024-07-23 01:09:47.672194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.692 [2024-07-23 01:09:47.672220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420
00:30:03.692 qpair failed and we were unable to recover it.
00:30:03.692 [2024-07-23 01:09:47.672435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.672587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.672622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.672833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.673021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.673049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.673230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.673419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.673446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.673636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.673830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.673855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.674038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.674250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.674278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.674489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.674695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.674723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.674935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.675106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.675131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 
00:30:03.692 [2024-07-23 01:09:47.675269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.675481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.675509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.675689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.675873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.675901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.676054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.676270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.676297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.676485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.676676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.676702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.676927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.677104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.677131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.677295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.677432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.677457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.692 qpair failed and we were unable to recover it. 00:30:03.692 [2024-07-23 01:09:47.677676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.677855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.692 [2024-07-23 01:09:47.677883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 
00:30:03.693 [2024-07-23 01:09:47.678063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.678253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.678278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.678444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.678587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.678612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.678791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.678965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.678992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.679195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.679381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.679408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.679572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.679750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.679794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.679988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.680326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.680354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.680549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.680716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.680741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 
00:30:03.693 [2024-07-23 01:09:47.680929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.681059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.681101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.681285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.681455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.681482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.681673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.681841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.681867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.682088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.682262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.682301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.682432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.682626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.682651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.682890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.683105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.683132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.683316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.683452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.683494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 
00:30:03.693 [2024-07-23 01:09:47.683664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.683895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.683936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.684116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.684302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.684329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.684506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.684670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.684711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.684869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.685050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.685078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.685255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.685458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.685486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.685670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.685858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.685885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.686039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.686220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.686247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 
00:30:03.693 [2024-07-23 01:09:47.686452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.686663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.686687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.686861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.687026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.687052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.687246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.687464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.687491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.687666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.687843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.687871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.688033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.688207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.688231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.688439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.688668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.688696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 00:30:03.693 [2024-07-23 01:09:47.688879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.689094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.693 [2024-07-23 01:09:47.689118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.693 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-07-23 01:09:47.689296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.689526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.689550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.689742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.690044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.690071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.690252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.690430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.690454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.690649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.690862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.690890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.691096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.691316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.691344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.691551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.691731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.691760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.691986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.692172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.692196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-07-23 01:09:47.692389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.692546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.692585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.692816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.692994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.693022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.693197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.693410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.693438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.693627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.693861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.693889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.694066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.694283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.694310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.694529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.694729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.694754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.694946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.695151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.695174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-07-23 01:09:47.695390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.695550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.695589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.695753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.695963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.695990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.696198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.696376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.696403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.696609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.696797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.696827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.697040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.697229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.697257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.697466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.697673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.697699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.697862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.698069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.698098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-07-23 01:09:47.698251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.698464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.698491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.698684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.698826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.698852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.699022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.699176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.699206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.699435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.699629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.699657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.699834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.700040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.700066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.700245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.700460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.700487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-07-23 01:09:47.700676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.700848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-07-23 01:09:47.700888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-07-23 01:09:47.701107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.701325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.701350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.701552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.701772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.701801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.701965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.702158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.702200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.702381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.702561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.702588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.702749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.702924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.702951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.703139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.703434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.703462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.703671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.703852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.703879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-07-23 01:09:47.704059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.704265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.704292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.704471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.704632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.704672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.704880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.705077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.705102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.705311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.705460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.705489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.705655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.705834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.705874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.706063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.706253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.706292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.706505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.706691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.706717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-07-23 01:09:47.706904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.707124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.707151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.707358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.707611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.707660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.707831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.708043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.708070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.708313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.708498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.708522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.708690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.708843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.708870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.709039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.709223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.709262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-07-23 01:09:47.709396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.709566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.709595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-07-23 01:09:47.709797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.709977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-07-23 01:09:47.710004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.710184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.710359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.710387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.710600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.710815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.710842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.711025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.711231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.711258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.711476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.711655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.711680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.711902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.712109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.712136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.712347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.712500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.712527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-07-23 01:09:47.712681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.712875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.712914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.713100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.713274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.713301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.713466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.713641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.713685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.713864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.714017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.714056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.714279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.714467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.714494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.714701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.714866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.714908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.715119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.715296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.715323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-07-23 01:09:47.715562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.715747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.715772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.715970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.716156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.716183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.716366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.716574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.716601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.716793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.717000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.717028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.717327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.717524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.717552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.717743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.717956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.717988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.718173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.718354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.718381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-07-23 01:09:47.718536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.718746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.718774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.718958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.719216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.719246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.719424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.719635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.719663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.719821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.720000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.720039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.720223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.720422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.720447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.720618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.720819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.720846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-07-23 01:09:47.721035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.721248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.721275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-07-23 01:09:47.721450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.721601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-07-23 01:09:47.721637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.721860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.722006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.722045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.722237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.722427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.722456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.722680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.722862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.722886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.723053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.723191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.723214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.723387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.723641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.723669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.723868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.724085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.724112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-07-23 01:09:47.724292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.724492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.724516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.724728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.724933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.724960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.725167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.725348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.725377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.725611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.725805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.725833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.726000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.726258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.726288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.726494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.726711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.726739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.726942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.727102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.727145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-07-23 01:09:47.727335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.727609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.727645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.727820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.728036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.728063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.728270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.728445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.728472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.728712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.728927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.728954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.729161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.729325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.729365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.729556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.729706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.729736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.729952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.730159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.730186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-07-23 01:09:47.730360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.730525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.730565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.730746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.730979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.731007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.731184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.731388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.731415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.731635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.731788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.731816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.731998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.732210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.732237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.732419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.732561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.732585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.732799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.733018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.733046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-07-23 01:09:47.733234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.733377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-07-23 01:09:47.733401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-07-23 01:09:47.733594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.733760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.733785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.733950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.734161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.734189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.734366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.734549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.734578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.734787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.734972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.735000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.735205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.735421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.735449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.735669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.735877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.735905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 
00:30:03.698 [2024-07-23 01:09:47.736052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.736231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.736260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.736440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.736641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.736666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.736897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.737080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.737109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.737303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.737481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.737510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.737722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.737869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.737909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.738101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.738318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.738342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.738531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.738716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.738741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 
00:30:03.698 [2024-07-23 01:09:47.738999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.739186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.739213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.739397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.739578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.739608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.739799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.740011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.740039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.740243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.740466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.740490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.740644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.740831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.740860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.741038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.741217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.741246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.741473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.741690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.741718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 
00:30:03.698 [2024-07-23 01:09:47.741911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.742085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.742109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.742266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.742430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.742453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.742652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.742847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.742871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.743078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.743282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.743323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.743538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.743741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.743766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.743932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.744105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.744130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.698 [2024-07-23 01:09:47.744308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.744473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.744500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 
00:30:03.698 [2024-07-23 01:09:47.744702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.744905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.698 [2024-07-23 01:09:47.744930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.698 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.745069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.745244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.745285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.745530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.745744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.745771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.745979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.746188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.746212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.746402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.746609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.746654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.746874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.747081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.747108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.747298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.747531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.747559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 
00:30:03.699 [2024-07-23 01:09:47.747726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.747915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.747955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.748116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.748293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.748322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.748500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.748711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.748739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.748943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.749138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.749164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.749354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.749559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.749599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.749825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.749976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.750003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.750316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.750535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.750562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 
00:30:03.699 [2024-07-23 01:09:47.750749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.750956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.750983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.751160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.751372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.751411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.751609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.751825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.751852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.751998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.752176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.752202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.752359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.752523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.752565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.752762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.752926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.752953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.753108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.753241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.753264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 
00:30:03.699 [2024-07-23 01:09:47.753475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.753679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.753706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.753920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.754093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.754120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.754314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.754480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.754521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.754673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.754865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.754893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.755052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.755271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.755309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.755493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.755716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.755744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.755934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.756102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.756143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 
00:30:03.699 [2024-07-23 01:09:47.756357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.756590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.699 [2024-07-23 01:09:47.756622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.699 qpair failed and we were unable to recover it. 00:30:03.699 [2024-07-23 01:09:47.756853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.757035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.757061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.757245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.757384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.757410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.757651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.757835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.757860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.758059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.758279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.758306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.758486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.758691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.758719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.758898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.759083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.759109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 
00:30:03.700 [2024-07-23 01:09:47.759313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.759493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.759519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.759672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.759881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.759909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.760088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.760266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.760296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.760475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.760620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.760645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.760807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.761027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.761054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.761205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.761382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.761409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.761594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.761783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.761812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 
00:30:03.700 [2024-07-23 01:09:47.762025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.762162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.762187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.762351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.762548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.762575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.762776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.762928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.762957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.763142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.763337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.763361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.763545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.763726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.763755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.763923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.764112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.764138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.764291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.764468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.764496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 
00:30:03.700 [2024-07-23 01:09:47.764709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.764888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.764914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.765121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.765255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.765280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.765440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.765628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.765657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.765849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.766014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.766055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.700 [2024-07-23 01:09:47.766247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.766408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.700 [2024-07-23 01:09:47.766446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.700 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.766666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.766804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.766829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.767039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.767213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.767239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 
00:30:03.701 [2024-07-23 01:09:47.767418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.767603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.767641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.767851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.768032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.768059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.768277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.768443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.768466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.768604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.768834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.768862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.769017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.769197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.769224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.769386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.769566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.769593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.769791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.770002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.770029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 
00:30:03.701 [2024-07-23 01:09:47.770236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.770442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.770469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.770661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.770830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.770854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.771081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.771262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.771289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.771468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.771634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.771678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.771835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.772015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.772043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.772253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.772422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.772464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.772640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.772834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.772860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 
00:30:03.701 [2024-07-23 01:09:47.773069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.773233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.773257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.773457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.773636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.773663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.773869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.774057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.774086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.774292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.774495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.774523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.774710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.775005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.775033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.775215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.775400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.775439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.775633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.775813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.775845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 
00:30:03.701 [2024-07-23 01:09:47.776024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.776211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.776249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.776403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.776593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.776622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.776800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.776970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.776994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.777216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.777386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.777413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.777637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.777853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.701 [2024-07-23 01:09:47.777880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.701 qpair failed and we were unable to recover it. 00:30:03.701 [2024-07-23 01:09:47.778039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.778209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.778233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.778396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.778586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.778630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 
00:30:03.702 [2024-07-23 01:09:47.778807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.778968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.778992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.779201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.779363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.779407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.779665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.779813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.779841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.780007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.780226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.780253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.780432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.780639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.780680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.780840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.781018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.781044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.781205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.781414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.781441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 
00:30:03.702 [2024-07-23 01:09:47.781624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.781808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.781833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.782047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.782228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.782255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.782421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.782595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.782636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.782803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.782934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.782976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.783116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.783314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.783341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.783492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.783674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.783709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.783879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.784067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.784107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 
00:30:03.702 [2024-07-23 01:09:47.784289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.784471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.784499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.784670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.784801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.784825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.785007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.785207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.785232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.785396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.785533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.785556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.785749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.785960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.785987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.786197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.786401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.786428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.786645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.786785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.786809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 
00:30:03.702 [2024-07-23 01:09:47.786977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.787194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.787222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.787407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.787583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.787610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.787794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.787960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.787985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.788171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.788348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.788377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.788585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.788803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.788829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.702 qpair failed and we were unable to recover it. 00:30:03.702 [2024-07-23 01:09:47.789020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.702 [2024-07-23 01:09:47.789153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.789177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.789340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.789496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.789523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 
00:30:03.703 [2024-07-23 01:09:47.789714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.789887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.789929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.790134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.790285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.790311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.790516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.790737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.790762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.790975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.791128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.791155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.791338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.791482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.791510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.791720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.791897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.791925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.792078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.792286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.792313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 
00:30:03.703 [2024-07-23 01:09:47.792493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.792670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.792699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.792918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.793051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.793074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.793221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.793424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.793451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.793607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.793777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.793802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.793944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.794131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.794157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.794321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.794459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.794486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.794710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.794897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.794924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 
00:30:03.703 [2024-07-23 01:09:47.795081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.795270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.795312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.795528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.795693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.795719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.795907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.796103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.796128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.796310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.796485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.796511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.796672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.796808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.796833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.797026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.797204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.797230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.797435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.797647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.797673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 
00:30:03.703 [2024-07-23 01:09:47.797839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.798004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.798029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.798193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.798323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.798367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.798545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.798720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.798748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.798946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.799150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.799177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.799338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.799527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.799552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-07-23 01:09:47.799700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-07-23 01:09:47.799890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.799933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.800141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.800316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.800342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-07-23 01:09:47.800520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.800728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.800756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.800936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.801114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.801141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.801301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.801460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.801486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.801676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.801858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.801885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.802046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.802212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.802238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.802462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.802665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.802694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.802845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.803032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.803057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-07-23 01:09:47.803218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.803425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.803453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.803633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.803848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.803875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.804048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.804229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.804256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.804406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.804591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.804641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.804821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.804998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.805024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.805208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.805395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.805419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.805626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.805790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.805816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-07-23 01:09:47.805982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.806189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.806217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.806426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.806582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.806606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.806811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.806966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.806996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.807176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.807355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.807383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.807571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.807767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.807797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.807952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.808141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.808168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.808322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.808475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.808502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-07-23 01:09:47.808690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.808871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.808898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.809087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.809230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.809255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.809421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.809582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.809606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.809775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.809938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.809962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.810127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.810293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.810335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-23 01:09:47.810544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.810731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-07-23 01:09:47.810756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.810938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.811144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.811169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-07-23 01:09:47.811357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.811505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.811534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.811714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.811921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.811949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.812108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.812267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.812311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.812494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.812711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.812736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.812903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.813065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.813106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.813279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.813446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.813470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.813668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.813858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.813885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-07-23 01:09:47.814064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.814217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.814246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.814423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.814603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.814639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.814853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.815057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.815084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.815273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.815456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.815483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.815662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.815837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.815865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.816052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.816197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.816223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.816401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.816582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.816610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-07-23 01:09:47.816808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.816983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.817010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.817221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.817392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.817419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.817601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.817794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.817821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.818025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.818231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.818257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.818397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.818556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.818580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.818756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.818933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.818960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-23 01:09:47.819149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-07-23 01:09:47.819326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.819353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-07-23 01:09:47.819531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.819748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.819776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.819935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.820101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.820125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.820308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.820468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.820492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.820712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.820915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.820942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.821158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.821382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.821407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.821570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.821742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.821784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.821927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.822131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.822159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-07-23 01:09:47.822344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.822535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.822558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.822759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.822930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.822955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.823084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.823293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.823319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.823523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.823740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.823765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.823978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.824145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.824173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.824329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.824536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.824564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.824753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.824921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.824946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-07-23 01:09:47.825155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.825332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.825361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.825542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.825705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.825730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.825896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.826094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.826121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.826313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.826469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.826496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.826677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.826894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.826920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.827108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.827289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.827316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.827521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.827677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.827705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-07-23 01:09:47.827912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.828079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.828104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.828266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.828410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.828435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.828596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.828802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.828827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.829032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.829236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.829263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.829457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.829597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.829647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.829829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.829980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.830009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-07-23 01:09:47.830212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.830388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-07-23 01:09:47.830415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.707 [2024-07-23 01:09:47.830566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.830762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.830792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.830981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.831164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.831190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.831376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.831559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.831587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.831764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.831943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.831971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.832163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.832329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.832371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.832561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.832703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.832748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.832942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.833142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.833170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 
00:30:03.707 [2024-07-23 01:09:47.833350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.833501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.833529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.833750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.833891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.833916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.834110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.834302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.834332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.834509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.834698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.834727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.834894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.835063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.835089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.835301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.835518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.835542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.835734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.835922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.835949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 
00:30:03.707 [2024-07-23 01:09:47.836129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.836279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.836309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.836477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.836670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.836712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.836916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.837088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.837115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.837308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.837493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.837516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.837681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.837835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.837863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.838044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.838182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.838207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.838424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.838604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.838651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 
00:30:03.707 [2024-07-23 01:09:47.838836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.839039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.839065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.839260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.839419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.839458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.839636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.839801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.839840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.839986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.840175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.840202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.840412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.840632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.840660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.840815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.840967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.840994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 00:30:03.707 [2024-07-23 01:09:47.841187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.841348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.841388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.707 qpair failed and we were unable to recover it. 
00:30:03.707 [2024-07-23 01:09:47.841566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.707 [2024-07-23 01:09:47.841746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.841775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.841929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.842093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.842134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.842288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.842457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.842489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.842688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.842892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.842919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.843095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.843305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.843330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.843495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.843703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.843731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.843917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.844096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.844123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 
00:30:03.708 [2024-07-23 01:09:47.844301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.844475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.844501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.844666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.844857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.844897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.845079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.845219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.845244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.845423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.845636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.845678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.845826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.845962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.845986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.846145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.846305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.846333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.846523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.846705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.846733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 
00:30:03.708 [2024-07-23 01:09:47.846940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.847112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.847138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.847323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.847526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.847553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.847737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.847927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.847954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.848115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.848276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.848300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.848522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.848733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.848762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.848938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.849147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.849173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.849353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.849505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.849532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 
00:30:03.708 [2024-07-23 01:09:47.849709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.849881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.849907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.850072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.850202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.850226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.850440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.850641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.850670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.850874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.851076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.851104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.851292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.851432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.851456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.851587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.851776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.851804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 00:30:03.708 [2024-07-23 01:09:47.851991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.852176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.852203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.708 qpair failed and we were unable to recover it. 
00:30:03.708 [2024-07-23 01:09:47.852415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.708 [2024-07-23 01:09:47.852595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.852633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.852801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.852965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.852990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.853140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.853338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.853365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.853540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.853723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.853751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.853921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.854133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.854160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.854364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.854548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.854578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.854736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.854915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.854945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 
00:30:03.709 [2024-07-23 01:09:47.855120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.855298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.855327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.855508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.855716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.855744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.855934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.856097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.856121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.856282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.856420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.856446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.856608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.856788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.856813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.856950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.857146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.857169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.857307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.857496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.857521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 
00:30:03.709 [2024-07-23 01:09:47.857745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.857916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.857941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.858086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.858273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.858298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.858483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.858633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.858662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.858878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.859049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.859077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.859284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.859452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.859479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.859687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.859862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.859890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.860093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.860303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.860327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 
00:30:03.709 [2024-07-23 01:09:47.860493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.860626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.860651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.860822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.861001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.861029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.861237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.861379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.861404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.709 [2024-07-23 01:09:47.861620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.861790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.709 [2024-07-23 01:09:47.861817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.709 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.861979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.862145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.862169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.862384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.862555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.862582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.862750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.862932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.862962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 
00:30:03.710 [2024-07-23 01:09:47.863173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.863332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.863356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.863555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.863707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.863735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.863915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.864097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.864124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.864325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.864491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.864516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.864670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.864832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.864858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.865013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.865216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.865243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 00:30:03.710 [2024-07-23 01:09:47.865422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.865600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.710 [2024-07-23 01:09:47.865632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.710 qpair failed and we were unable to recover it. 
[... the same four-line sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every further attempt from 2024-07-23 01:09:47.865844 through 01:09:47.923342, with the elapsed-time prefix advancing from 00:30:03.710 to 00:30:03.986 ...]
00:30:03.986 [2024-07-23 01:09:47.923494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.986 [2024-07-23 01:09:47.923685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.923715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.923888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.924066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.924090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.924222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.924358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.924382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.924571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.924770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.924795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.924984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.925141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.925171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.925354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.925505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.925532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.925750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.925933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.925961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 
00:30:03.987 [2024-07-23 01:09:47.926145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.926370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.926397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.926607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.926775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.926799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.926966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.927182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.927209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.927385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.927585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.927621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.927781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.927954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.927979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.928185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.928364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.928391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.928575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.928748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.928777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 
00:30:03.987 [2024-07-23 01:09:47.928960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.929133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.929161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.929343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.929527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.929554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.929749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.929933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.929961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.930140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.930350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.930376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.930559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.930739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.930767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.930932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.931094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.931118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.931256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.931465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.931492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 
00:30:03.987 [2024-07-23 01:09:47.931675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.931890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.931914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.932081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.932266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.932292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.932482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.932671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.932700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.932886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.933100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.933126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.933310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.933524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.933549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.933696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.933861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.933888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 00:30:03.987 [2024-07-23 01:09:47.934075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.934256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.934283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.987 qpair failed and we were unable to recover it. 
00:30:03.987 [2024-07-23 01:09:47.934489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.987 [2024-07-23 01:09:47.934668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.934696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.934887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.935066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.935093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.935272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.935454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.935484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.935698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.935841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.935866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.936029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.936259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.936283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.936445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.936634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.936662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.936855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.937027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.937055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 
00:30:03.988 [2024-07-23 01:09:47.937268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.937452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.937479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.937646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.937786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.937812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.938014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.938196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.938223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.938402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.938578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.938605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.938780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.938984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.939012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.939216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.939393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.939421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.939581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.939787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.939812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 
00:30:03.988 [2024-07-23 01:09:47.939991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.940158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.940183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.940373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.940555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.940584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.940806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.941000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.941028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.941188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.941371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.941398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.941603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.941816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.941843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.942026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.942158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.942183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.942339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.942533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.942562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 
00:30:03.988 [2024-07-23 01:09:47.942742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.942949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.942976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.943183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.943354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.943381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.943553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.943762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.943790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.943968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.944149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.944177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.944353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.944566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.944589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.944764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.944928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.944957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.988 [2024-07-23 01:09:47.945137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.945318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.945346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 
00:30:03.988 [2024-07-23 01:09:47.945528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.945700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.988 [2024-07-23 01:09:47.945725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.988 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.945894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.946079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.946106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.946277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.946488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.946515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.946704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.946889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.946916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.947094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.947270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.947297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.947516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.947688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.947713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.947882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.948069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.948098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 
00:30:03.989 [2024-07-23 01:09:47.948313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.948495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.948524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.948682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.948848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.948876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.949062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.949239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.949266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.949444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.949636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.949662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.949804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.950001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.950024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.950182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.950384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.950408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.950593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.950804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.950830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 
00:30:03.989 [2024-07-23 01:09:47.951039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.951219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.951246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.951441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.951646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.951687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.951855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.952054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.952079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.952244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.952456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.952483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.952658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.952842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.952870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.953026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.953165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.953191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.953355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.953544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.953570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 
00:30:03.989 [2024-07-23 01:09:47.953793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.954003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.954028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.954212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.954406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.954431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.954664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.954833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.954857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.955068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.955221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.955250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.955454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.955638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.955668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.955877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.956055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.956085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.956268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.956478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.956506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 
00:30:03.989 [2024-07-23 01:09:47.956677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.956830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.956861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.989 qpair failed and we were unable to recover it. 00:30:03.989 [2024-07-23 01:09:47.957039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.989 [2024-07-23 01:09:47.957219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.957246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.957463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.957624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.957653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.957840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.957984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.958011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.958193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.958400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.958428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.958607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.958768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.958796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.958949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.959167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.959193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 
00:30:03.990 [2024-07-23 01:09:47.959372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.959537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.959562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.959743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.959948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.959975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.960182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.960352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.960379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.960584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.960771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.960806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.961010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.961194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.961220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.961408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.961570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.961595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.961786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.961966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.961992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 
00:30:03.990 [2024-07-23 01:09:47.962175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.962383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.962411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.962678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.962838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.962863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.963056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.963203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.963231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.963412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.963591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.963629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.963854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.964021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.964045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.964233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.964447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.964474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.964626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.964773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.964801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 
00:30:03.990 [2024-07-23 01:09:47.964948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.965111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.965137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.965364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.965543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.965569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.965734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.965902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.965926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.966064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.966281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.966308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.966456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.966599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.966634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.966826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.967062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.967088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.967274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.967482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.967511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 
00:30:03.990 [2024-07-23 01:09:47.967673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.967854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.967880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.968088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.968294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.990 [2024-07-23 01:09:47.968321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.990 qpair failed and we were unable to recover it. 00:30:03.990 [2024-07-23 01:09:47.968514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.968648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.968677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.968823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.969003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.969030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.969236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.969444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.969469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.969638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.969815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.969844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.970052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.970254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.970282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 
00:30:03.991 [2024-07-23 01:09:47.970454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.970641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.970668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.970877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.971037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.971067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.971254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.971441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.971465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.971634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.971823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.971850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.972061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.972202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.972227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.972367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.972533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.972558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.972719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.972896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.972923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 
00:30:03.991 [2024-07-23 01:09:47.973099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.973280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.973307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.973487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.973641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.973685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.973888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.974019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.974043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.974217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.974395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.974423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.974605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.974806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.974833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.975013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.975198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.975228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.975435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.975629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.975656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 
00:30:03.991 [2024-07-23 01:09:47.975814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.976022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.976050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.976228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.976382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.976411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.976609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.976799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.976829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.976981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.977171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.977199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.977406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.977559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.977586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-07-23 01:09:47.977786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.977949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-07-23 01:09:47.977973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.978115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.978321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.978348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-07-23 01:09:47.978523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.978703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.978733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.978918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.979108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.979132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.979325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.979459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.979486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.979730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.979889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.979918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.980111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.980319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.980347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.980539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.980698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.980728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.980917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.981067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.981095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-07-23 01:09:47.981246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.981405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.981445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.981656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.981817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.981842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.982012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.982208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.982236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.982415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.982628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.982656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.982856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.983058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.983085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.983292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.983441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.983468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.983622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.983796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.983821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-07-23 01:09:47.983970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.984164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.984192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.984361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.984552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.984577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.984749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.984891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.984918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.985104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.985256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.985283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.985436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.985644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.985672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.985883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.986092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.986120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.986296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.986479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.986506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-07-23 01:09:47.986680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.986859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.986888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.987031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.987205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.987232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.987394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.987585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.987610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.987840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.987986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.988013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.988165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.988383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.988407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.988611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.988798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.988827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-07-23 01:09:47.989001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-07-23 01:09:47.989160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.989184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-07-23 01:09:47.989366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.989567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.989595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.989760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.989966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.989994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.990208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.990340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.990364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.990600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.990812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.990840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.990988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.991172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.991199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.991377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.991552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.991580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.991794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.991931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.991957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-07-23 01:09:47.992125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.992333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.992360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.992537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.992694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.992724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.992884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.993090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.993117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.993300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.993491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.993515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.993682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.993844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.993870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.994078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.994253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.994280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.994465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.994656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.994681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-07-23 01:09:47.994866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.995057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.995081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.995250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.995384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.995407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.995621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.995784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.995810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.995952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.996147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.996174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.996365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.996529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.996569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.996758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.996970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.996997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.997169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.997354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.997382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-07-23 01:09:47.997555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.997706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.997732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.997943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.998088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.998116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.998294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.998500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.998528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.998699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.998863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.998888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.999100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.999306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.999333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.999540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.999742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:47.999770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-07-23 01:09:47.999953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:48.000095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-07-23 01:09:48.000120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-07-23 01:09:48.000288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.000478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.000505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.000690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.000837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.000865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.001057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.001224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.001248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.001418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.001600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.001636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.001844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.001995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.002022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.002182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.002365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.002388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.002553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.002765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.002790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-07-23 01:09:48.003009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.003160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.003186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.003359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.003518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.003543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.003736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.003934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.003961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.004124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.004280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.004305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.004476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.004647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.004672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.004880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.005062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.005087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.005263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.005422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.005448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-07-23 01:09:48.005658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.005843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.005869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.006034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.006210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.006236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.006415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.006594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.006630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.006788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.006963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.006991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.007183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.007356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.007383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.007595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.007791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.007819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.007988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.008152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.008176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-07-23 01:09:48.008393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.008583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.008608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.008757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.008924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.008948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.009140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.009327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.009355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.009508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.009661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.009690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.009839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.010021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.010050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.010223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.010428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.010456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-07-23 01:09:48.010622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.010770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.010795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-07-23 01:09:48.011007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.011196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-07-23 01:09:48.011221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.011360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.011510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.011536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.011732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.011943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.011969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.012130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.012314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.012341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.012499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.012683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.012711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.012892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.013044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.013071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.013267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.013411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.013435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 
00:30:03.995 [2024-07-23 01:09:48.013627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.013791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.013814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.013997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.014174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.014204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.014402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.014605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.014640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.014792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.014986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.015011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.015153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.015368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.015395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.015571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.015746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.015773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.015935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.016123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.016151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 
00:30:03.995 [2024-07-23 01:09:48.016335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.016502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.016527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.016712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.016902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.016931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.017110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.017259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.017288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.017447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.017644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.017672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.017828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.018046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.018074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.018270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.018412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.018436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.018629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.018808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.018836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 
00:30:03.995 [2024-07-23 01:09:48.019019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.019225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.019257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.019448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.019622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.019648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.019796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.019925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.019964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.020174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.020387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.020412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.020607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.020800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.020825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.021015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.021202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.021231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.021413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.021572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.021599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 
00:30:03.995 [2024-07-23 01:09:48.021795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.022008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.995 [2024-07-23 01:09:48.022034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.995 qpair failed and we were unable to recover it. 00:30:03.995 [2024-07-23 01:09:48.022192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.022367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.022394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.022624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.022827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.022853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.023048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.023240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.023269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.023453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.023642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.023669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.023882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.024033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.024060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.024238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.024449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.024475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 
00:30:03.996 [2024-07-23 01:09:48.024639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.024820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.024847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.025063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.025238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.025267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.025471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.025653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.025681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.025875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.026015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.026039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.026201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.026340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.026365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.026547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.026723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.026751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.026957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.027166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.027197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 
00:30:03.996 [2024-07-23 01:09:48.027383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.027570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.027594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.027795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.027967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.027991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.028181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.028383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.028407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.028586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.028783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.028809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.028982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.029195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.029222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.029404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.029579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.029607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.029796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.029990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.030015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 
00:30:03.996 [2024-07-23 01:09:48.030194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.030401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.030428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.030640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.030825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.030854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.031020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.031160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.031189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.031373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.031573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.031601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.031795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.031964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.996 [2024-07-23 01:09:48.031988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.996 qpair failed and we were unable to recover it. 00:30:03.996 [2024-07-23 01:09:48.032176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.032362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.032391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.032601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.032756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.032785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 
00:30:03.997 [2024-07-23 01:09:48.032964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.033141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.033168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.033348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.033525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.033555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.033767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.033981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.034008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.034167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.034305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.034329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.034526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.034695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.034723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.034911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.035048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.035074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.035266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.035442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.035469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 
00:30:03.997 [2024-07-23 01:09:48.035659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.035843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.035870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.036080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.036221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.036245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.036454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.036637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.036663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.036800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.036995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.037022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.037203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.037383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.037411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.037602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.037773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.037800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.038001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.038165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.038205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 
00:30:03.997 [2024-07-23 01:09:48.038396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.038561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.038602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.038839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.039023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.039050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.039256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.039432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.039460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.039644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.039851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.039879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.040085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.040221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.040248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.040411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.040578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.040602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.040772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.040930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.040956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 
00:30:03.997 [2024-07-23 01:09:48.041115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.041275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.041299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.041435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.041641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.041669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.041866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.042045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.042073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.042273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.042434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.042476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.042687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.042876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.997 [2024-07-23 01:09:48.042902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.997 qpair failed and we were unable to recover it. 00:30:03.997 [2024-07-23 01:09:48.043120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.043303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.043331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.043522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.043651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.043677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 
00:30:03.998 [2024-07-23 01:09:48.043820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.044037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.044064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.044253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.044415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.044440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.044625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.044803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.044831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.044992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.045123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.045149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.045345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.045546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.045573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.045761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.045915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.045942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.046106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.046274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.046300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 
00:30:03.998 [2024-07-23 01:09:48.046496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.046638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.046663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.046871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.047076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.047103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.047276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.047463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.047491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.047700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.047911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.047939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.048125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.048341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.048368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.048555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.048734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.048762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.048975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.049179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.049206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 
00:30:03.998 [2024-07-23 01:09:48.049357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.049529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.049557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.049767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.049952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.049979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.050160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.050349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.050379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.050584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.050773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.050801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.050957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.051183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.051208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.051375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.051586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.051622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.051807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.052017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.052042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 
00:30:03.998 [2024-07-23 01:09:48.052223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.052431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.052458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.052627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.052765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.052789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.052959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.053126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.053151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.053307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.053474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.053517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.053714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.053879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.053903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.998 qpair failed and we were unable to recover it. 00:30:03.998 [2024-07-23 01:09:48.054038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.054226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.998 [2024-07-23 01:09:48.054251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.054416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.054578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.054640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 
00:30:03.999 [2024-07-23 01:09:48.054850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.055058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.055085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.055289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.055495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.055522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.055730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.055914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.055941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.056133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.056277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.056318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.056501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.056708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.056736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.056919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.057121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.057148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.057324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.057506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.057535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 
00:30:03.999 [2024-07-23 01:09:48.057761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.057916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.057943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.058151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.058359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.058387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.058565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.058724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.058753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.058930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.059085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.059112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.059315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.059467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.059496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.059685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.059841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.059871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.060052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.060261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.060288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 
00:30:03.999 [2024-07-23 01:09:48.060439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.060648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.060677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.060858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.060998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.061038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.061226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.061429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.061458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.061666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.061854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.061888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.062054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.062249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.062273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.062440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.062585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.062610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.062810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.062972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.063002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 
00:30:03.999 [2024-07-23 01:09:48.063191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.064467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.064501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.064694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.064884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.064913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.065100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.065263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.065288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.065465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.065668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.065694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.065834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.065964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.065990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:03.999 [2024-07-23 01:09:48.066168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.066365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.999 [2024-07-23 01:09:48.066389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:03.999 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.066561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.066754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.066781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 
00:30:04.000 [2024-07-23 01:09:48.066946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.067113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.067138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.067341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.067478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.067504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.067708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.067882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.067907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.068068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.068235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.068259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.068425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.068594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.068628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.068792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.068945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.068969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.069136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.069350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.069377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 
00:30:04.000 [2024-07-23 01:09:48.069527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.069681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.069707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.069884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.070093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.070120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.070331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.070540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.070567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.070760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.070913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.070942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.071155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.071339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.071367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.071519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.071708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.071737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.071919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.072095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.072123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 
00:30:04.000 [2024-07-23 01:09:48.072307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.072468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.072492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.072664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.072807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.072831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.072991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.073154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.073180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.073352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.073599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.073636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.073808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.073971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.074012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.074207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.074374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.074402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-07-23 01:09:48.074558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.074760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.074788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 
00:30:04.000 [2024-07-23 01:09:48.074947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.075106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-07-23 01:09:48.075130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.075367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.075525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.075553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.075744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.075914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.075955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.076157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.076332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.076359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.076540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.076714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.076740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.076916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.077079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.077103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.077273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.077485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.077514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-07-23 01:09:48.077667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.077845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.077872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.078011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.078219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.078244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.078420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.078633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.078667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.078855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.078999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.079041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.079253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.079443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.079471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.079658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.079840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.079867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.080019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.080193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.080220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-07-23 01:09:48.080430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.080652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.080680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.080859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.081124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.081151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.081303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.081445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.081472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.081662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.081830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.081855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.082102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.082320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.082348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.082517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.082706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.082735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-07-23 01:09:48.082893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.083109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-07-23 01:09:48.083134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-07-23 01:09:48.083295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.083538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.083567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.083745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.083890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.083932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.084114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.084324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.084352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.084506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.084666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.084695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.084840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.085041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.085067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.085206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.085388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.085416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.085561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.085755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.085781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 
00:30:04.002 [2024-07-23 01:09:48.085917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.086080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.086105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.086282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.086449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.086474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.086641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.086793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.086819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.087016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.087275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.087308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.087521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.087707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.087736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.087919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.088594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.088636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.088812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.089017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.089044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 
00:30:04.002 [2024-07-23 01:09:48.089242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.089395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.089424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.089620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.089814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.089843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.090025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.090195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.090234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.090433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.090598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.090649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.090834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.091021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.091049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.091229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.091398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.091426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.091588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.091774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.091808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 
00:30:04.002 [2024-07-23 01:09:48.091983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.092108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.092148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.092361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.092627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.092658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.092823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.093001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.093027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-07-23 01:09:48.093198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-07-23 01:09:48.093361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.093387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.093561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.093701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.093726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.093899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.094100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.094125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.094318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.094448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.094472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-07-23 01:09:48.094637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.094781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.094805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.094975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.095138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.095163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.095357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.095491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.095517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.095662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.095802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.095827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.096017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.096180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.096205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.096375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.096565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.096589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.096743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.096881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.096904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-07-23 01:09:48.097095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.097228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.097252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.097416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.097570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.097594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.097748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.097883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.097911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.098071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.098260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.098284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.098422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.098561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.098586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.098731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.098866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.098891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.099069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.099249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.099276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-07-23 01:09:48.099420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.099593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.099635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.099806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.099947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.099971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.100136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.100327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.100351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.100492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.100682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.100709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.100853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.101054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.101078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.101241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.101375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.101401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.101568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.101744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.101769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-07-23 01:09:48.101926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.102077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.102101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.102292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.102462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.102487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.102659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.102807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-07-23 01:09:48.102832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-07-23 01:09:48.103023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.103187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.103211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.103388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.103586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.103611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.103797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.103940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.103965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.104130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.104332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.104357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-07-23 01:09:48.104511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.104651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.104677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.104841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.104977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.105002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.105171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.105338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.105362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.105512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.105699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.105724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.105872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.106009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.106034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.106210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.106396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.106420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.106580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.106741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.106767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-07-23 01:09:48.106938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.107114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.107138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.107274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.107435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.107459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.107649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.107791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.107815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.107977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.108139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.108165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.108329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.108464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.108489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.108635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.108805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.108830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.109013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.109185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.109211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-07-23 01:09:48.109375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.109538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.109562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.109743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.109877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.109901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.110030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.110192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.110216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.110355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.110518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.110543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.110692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.110856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.110880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.111020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.111193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.111217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-07-23 01:09:48.111377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.111511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-07-23 01:09:48.111535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-07-23 01:09:48.111709 through 01:09:48.161981] (same sequence repeated) posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:30:04.010 [2024-07-23 01:09:48.162147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.162301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.162325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.162469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.162636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.162664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.162827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.163028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.163053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.163188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.163350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.163374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.163537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.163700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.163725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.163914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.164214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 
00:30:04.010 [2024-07-23 01:09:48.164508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.164837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.164999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.165025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.165190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.165354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.165379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.165545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.165706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.165731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.165891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.166036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.166059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.166221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.166387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.166412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.166581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.166784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.166819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 
00:30:04.010 [2024-07-23 01:09:48.167011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.167178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.167203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.167396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.167537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.167564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.167702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.167871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.167896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.168068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.168233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.168258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.168439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.168651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.168685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.168859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.169017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.169044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 00:30:04.010 [2024-07-23 01:09:48.169191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.169381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.169405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.010 qpair failed and we were unable to recover it. 
00:30:04.010 [2024-07-23 01:09:48.169594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.169788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.010 [2024-07-23 01:09:48.169814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.011 qpair failed and we were unable to recover it. 00:30:04.011 [2024-07-23 01:09:48.169955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.011 [2024-07-23 01:09:48.170162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.011 [2024-07-23 01:09:48.170200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.011 qpair failed and we were unable to recover it. 00:30:04.011 [2024-07-23 01:09:48.170414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.170552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.170577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.170781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.170943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.170967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.171142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.171327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.171361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.171573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.171758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.171794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.171981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.172166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.172203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 
00:30:04.284 [2024-07-23 01:09:48.172391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.172552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.172587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.172784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.172996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.173028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.173177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.173338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.173365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.173555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.173737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.173764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.173919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.174095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.174122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.174294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.174462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.174487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.174652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.174850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.174876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 
00:30:04.284 [2024-07-23 01:09:48.175031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.175197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.175222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.175355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.175553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.175578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.175784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.175949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.175974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.176164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.176326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.176350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.176492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.176638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.176677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.176813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.177011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.177045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.177212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.177353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.177377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 
00:30:04.284 [2024-07-23 01:09:48.177509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.177676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.177702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.284 qpair failed and we were unable to recover it. 00:30:04.284 [2024-07-23 01:09:48.177868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.178044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.284 [2024-07-23 01:09:48.178068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.178233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.178395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.178420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.178592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.178778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.178803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.178945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.179113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.179138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.179324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.179467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.179503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.179687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.179880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.179905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 
00:30:04.285 [2024-07-23 01:09:48.180070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.180263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.180288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.180460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.180624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.180650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.180819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.180960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.180987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.181168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.181356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.181380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.181575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.181737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.181763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.181909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.182077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.182103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.182293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.182435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.182460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 
00:30:04.285 [2024-07-23 01:09:48.182652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.182815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.182842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.182973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.183124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.183150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.183293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.183472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.183496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.183667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.183812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.183837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.184004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.184174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.184198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.184357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.184545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.184570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.184739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.184888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.184913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 
00:30:04.285 [2024-07-23 01:09:48.185079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.185254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.185279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.185481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.185648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.185674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.185841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.186009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.186034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.186197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.186340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.186365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.186526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.186664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.186690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.186936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.187124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.187148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.187312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.187446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.187471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 
00:30:04.285 [2024-07-23 01:09:48.187642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.187781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.187805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.187975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.188177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.285 [2024-07-23 01:09:48.188202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.285 qpair failed and we were unable to recover it. 00:30:04.285 [2024-07-23 01:09:48.188399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.188641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.188666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.188838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.188973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.189000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.189165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.189310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.189334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.189483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.189680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.189706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.189868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.190031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.190055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 
00:30:04.286 [2024-07-23 01:09:48.190213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.190377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.190404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.190561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.190730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.190756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.190927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.191093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.191118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.191308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.191446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.191471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.191639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.191772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.191799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.191973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.192213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.192238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.192480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.192677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.192702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 
00:30:04.286 [2024-07-23 01:09:48.192867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.193061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.193085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.193280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.193425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.193450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.193624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.193784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.193809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.194010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.194143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.194167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.194302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.194429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.194454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.194595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.194852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.194879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.195073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.195265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.195290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 
00:30:04.286 [2024-07-23 01:09:48.195449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.195626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.195652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.195821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.196011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.196036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.196172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.196370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.196395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.196568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.196736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.196762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.196932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.197096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.197125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.197298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.197458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.197483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.197623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.197787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.197812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 
00:30:04.286 [2024-07-23 01:09:48.197977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.198134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.198158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.198299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.198498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.198522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.286 qpair failed and we were unable to recover it. 00:30:04.286 [2024-07-23 01:09:48.198657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.286 [2024-07-23 01:09:48.198826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.198853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.287 qpair failed and we were unable to recover it. 00:30:04.287 [2024-07-23 01:09:48.199016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.199205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.199229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.287 qpair failed and we were unable to recover it. 00:30:04.287 [2024-07-23 01:09:48.199395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.199558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.199584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.287 qpair failed and we were unable to recover it. 00:30:04.287 [2024-07-23 01:09:48.199790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.199983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.200007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.287 qpair failed and we were unable to recover it. 00:30:04.287 [2024-07-23 01:09:48.200163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.200329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.287 [2024-07-23 01:09:48.200354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.287 qpair failed and we were unable to recover it. 
00:30:04.287 - 00:30:04.292 [2024-07-23 01:09:48.200544 .. 01:09:48.256110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.287 - 00:30:04.292 [2024-07-23 01:09:48.200544 .. 01:09:48.256110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420
00:30:04.287 - 00:30:04.292 qpair failed and we were unable to recover it.
(Condensed: throughout this window the initiator keeps retrying the connection in rapid succession, a few hundred microseconds apart. Each attempt logs two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error" record for tqpair=0x7fb168000b90 targeting 10.0.0.2 port 4420, and one "qpair failed and we were unable to recover it." record; only the timestamps differ between repetitions.)
00:30:04.292 [2024-07-23 01:09:48.256275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.256452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.256477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.292 qpair failed and we were unable to recover it. 00:30:04.292 [2024-07-23 01:09:48.256666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.256834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.256858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.292 qpair failed and we were unable to recover it. 00:30:04.292 [2024-07-23 01:09:48.257018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.257203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.257227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.292 qpair failed and we were unable to recover it. 00:30:04.292 [2024-07-23 01:09:48.257377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.257546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.257571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.292 qpair failed and we were unable to recover it. 00:30:04.292 [2024-07-23 01:09:48.257760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.257953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.257977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.292 qpair failed and we were unable to recover it. 00:30:04.292 [2024-07-23 01:09:48.258145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.258339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.292 [2024-07-23 01:09:48.258364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.292 qpair failed and we were unable to recover it. 00:30:04.292 [2024-07-23 01:09:48.258510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.258673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.258697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 
00:30:04.293 [2024-07-23 01:09:48.258867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.259007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.259035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.259192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.259324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.259348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.259538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.259730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.259756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.259894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.260081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.260106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.260273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.260435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.260459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.260596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.260800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.260826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.260982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.261137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.261161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 
00:30:04.293 [2024-07-23 01:09:48.261335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.261470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.261495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.261659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.261849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.261874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.262041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.262232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.262256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.262401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.262564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.262595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.262767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.262931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.262956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.263122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.263281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.263305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.263447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.263609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.263651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 
00:30:04.293 [2024-07-23 01:09:48.263816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.264006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.264030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.264197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.264360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.264384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.264527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.264674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.264702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.264863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.265038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.265061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.265257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.265423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.265447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.265609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.265756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.265781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.265948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.266115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.266139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 
00:30:04.293 [2024-07-23 01:09:48.266336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.266498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.266523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.266691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.266826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.293 [2024-07-23 01:09:48.266850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.293 qpair failed and we were unable to recover it. 00:30:04.293 [2024-07-23 01:09:48.267006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.267195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.267219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.267360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.267517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.267541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.267699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.267835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.267859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.267993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.268187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.268222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.268398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.268569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.268594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 
00:30:04.294 [2024-07-23 01:09:48.268740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.268928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.268953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.269121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.269256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.269281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.269473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.269661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.269686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.269888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.270034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.270060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.270229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.270399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.270423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.270589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.270863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.270889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.271060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.271250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.271274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 
00:30:04.294 [2024-07-23 01:09:48.271434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.271641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.271667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.271815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.271981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.272004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.272194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.272360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.272384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.272524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.272688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.272714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.272880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.273071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.273095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.273267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.273405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.273431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.273602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.273800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.273825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 
00:30:04.294 [2024-07-23 01:09:48.273991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.274186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.274211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.274375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.274539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.274563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.274724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.274861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.274886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.275046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.275200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.275224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.275388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.275576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.275600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.275793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.275983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.276008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.276137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.276312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.276336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 
00:30:04.294 [2024-07-23 01:09:48.276545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.276686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.276712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.276853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.277019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.277043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.294 qpair failed and we were unable to recover it. 00:30:04.294 [2024-07-23 01:09:48.277206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.294 [2024-07-23 01:09:48.277346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.277370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.277535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.277726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.277751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.277948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.278138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.278163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.278329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.278482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.278506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.278645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.278812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.278838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 
00:30:04.295 [2024-07-23 01:09:48.279004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.279162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.279186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.279347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.279502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.279526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.279734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.279926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.279951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.280081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.280225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.280250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.280418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.280582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.280607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.280764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.280906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.280932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.281102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.281242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.281269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 
00:30:04.295 [2024-07-23 01:09:48.281460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.281645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.281669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.281829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.281966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.281992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.282178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.282344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.282369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.282560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.282699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.282726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.282892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.283033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.283059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.283232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.283415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.283439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.283602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.283787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.283812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 
00:30:04.295 [2024-07-23 01:09:48.283952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.284114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.284138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.284289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.284429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.284455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.284593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.284739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.284765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.284960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.285121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.285146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.285283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.285446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.285470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.285619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.285782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.285806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.285962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.286122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.286146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 
00:30:04.295 [2024-07-23 01:09:48.286338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.286503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.286527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.286714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.286874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.286899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.295 [2024-07-23 01:09:48.287043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.287189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.295 [2024-07-23 01:09:48.287213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.295 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.287404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.287537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.287561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.287707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.287875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.287900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.288092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.288264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.288290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.288432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.288624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.288650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 
00:30:04.296 [2024-07-23 01:09:48.288800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.288964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.288989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.289157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.289326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.289351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.289490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.289654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.289680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.289836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.289978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.290003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.290188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.290352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.290376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.290519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.290684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.290710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.290881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.291023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.291050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 
00:30:04.296 [2024-07-23 01:09:48.291223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.291414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.291439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.291600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.291744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.291769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.291907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.292070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.292096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.292263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.292508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.292533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.292693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.292881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.292906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.293068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.293235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.293260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.293421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.293607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.293639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 
00:30:04.296 [2024-07-23 01:09:48.293771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.293936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.293962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.294096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.294285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.294309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.294447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.294611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.294644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.294836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.295085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.295110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.295304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.295441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.295466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.295640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.295783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.295810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.295988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.296129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.296156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 
00:30:04.296 [2024-07-23 01:09:48.296318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.296481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.296505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.296697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.296829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.296854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.296990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.297131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.297156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.296 qpair failed and we were unable to recover it. 00:30:04.296 [2024-07-23 01:09:48.297319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.296 [2024-07-23 01:09:48.297486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.297511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.297708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.297842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.297867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.298035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.298225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.298250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.298416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.298584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.298609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 
00:30:04.297 [2024-07-23 01:09:48.298804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.298934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.298959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.299122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.299314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.299338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.299503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.299698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.299723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.299862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.300063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.300088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.300248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.300383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.300407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.300567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.300755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.300781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.300941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.301111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.301135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 
00:30:04.297 [2024-07-23 01:09:48.301323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.301486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.301511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.301654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.301792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.301817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.301980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.302142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.302166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.302305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.302469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.302495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.302660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.302829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.302854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.303043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.303202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.303228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.303392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.303534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.303558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 
00:30:04.297 [2024-07-23 01:09:48.303750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.303916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.303940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.304105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.304274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.304300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.304462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.304624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.304650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.304815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.304999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.305024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.305165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.305368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.305392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.305552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.305701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.305726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.305915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.306076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.306100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 
00:30:04.297 [2024-07-23 01:09:48.306235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.306398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.306422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.306585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.306762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.306788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.306965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.307151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.307174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.297 [2024-07-23 01:09:48.307329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.307483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.297 [2024-07-23 01:09:48.307508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.297 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.307671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.307812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.307838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.307976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.308138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.308162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.308304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.308473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.308497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 
00:30:04.298 [2024-07-23 01:09:48.308659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.308824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.308849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.309039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.309205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.309233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.309418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.309609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.309641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.309802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.309967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.309991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.310154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.310285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.310309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.310498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.310662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.310688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.310852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.311020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.311046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 
00:30:04.298 [2024-07-23 01:09:48.311195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.311338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.311362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.311550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.311684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.311711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.311902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.312088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.312112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.312258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.312401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.312426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.312562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.312728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.312759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.312955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.313099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.313123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.313255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.313416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.313440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 
00:30:04.298 [2024-07-23 01:09:48.313601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.313745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.313770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.313908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.314098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.314123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.314318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.314450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.314475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.314621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.314791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.314815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.315016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.315171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.315196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.315329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.315526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.315551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 00:30:04.298 [2024-07-23 01:09:48.315725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.315890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.315915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.298 qpair failed and we were unable to recover it. 
00:30:04.298 [2024-07-23 01:09:48.316074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.298 [2024-07-23 01:09:48.316238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.316269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.316405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.316546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.316570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.316741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.316885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.316909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.317102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.317245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.317272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.317436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.317628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.317653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.317814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.317947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.317974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.318153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.318316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.318341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 
00:30:04.299 [2024-07-23 01:09:48.318481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.318672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.318698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.318862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.319027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.319053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.319212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.319345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.319369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.319556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.319697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.319728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.319891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.320060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.320084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.320275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.320418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.320444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.320608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.320778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.320804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 
00:30:04.299 [2024-07-23 01:09:48.320969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.321134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.321158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.321291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.321453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.321479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.321624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.321757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.321783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.321952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.322143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.322167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.322331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.322501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.322526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.322716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.322881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.322905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.323095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.323261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.323286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 
00:30:04.299 [2024-07-23 01:09:48.323457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.323641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.323666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.323833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.323969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.323995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.324162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.324296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.324322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.324465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.324627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.324652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.324814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.325009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.325033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.325193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.325331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.325356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.299 [2024-07-23 01:09:48.325520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.325717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.325742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 
00:30:04.299 [2024-07-23 01:09:48.325902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.326092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.299 [2024-07-23 01:09:48.326116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.299 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.326280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.326441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.326467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.326641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.326804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.326830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.327010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.327197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.327222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.327386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.327554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.327579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.327752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.327898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.327922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.328108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.328304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.328328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 
00:30:04.300 [2024-07-23 01:09:48.328495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.328685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.328710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.328867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.329059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.329084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.329246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.329385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.329409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.329576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.329744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.329769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.329935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.330098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.330123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.330282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.330444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.330468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.330634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.330775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.330802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 
00:30:04.300 [2024-07-23 01:09:48.330938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.331136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.331161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.331323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.331489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.331514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.331676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.331962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.331987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.332154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.332316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.332340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.332481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.332639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.332665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.332834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.333003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.333028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.333197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.333364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.333389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 
00:30:04.300 [2024-07-23 01:09:48.333548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.333724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.333749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.333913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.334102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.334127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.334297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.334462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.334487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.334654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.334821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.334846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.335040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.335223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.335248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.335411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.335549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.335575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 00:30:04.300 [2024-07-23 01:09:48.335726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.335861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.300 [2024-07-23 01:09:48.335887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.300 qpair failed and we were unable to recover it. 
00:30:04.300 [2024-07-23 01:09:48.336077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.300 [2024-07-23 01:09:48.336242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.300 [2024-07-23 01:09:48.336267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420
00:30:04.300 qpair failed and we were unable to recover it.
00:30:04.300-00:30:04.306 [2024-07-23 01:09:48.336418 - 01:09:48.390294] (the same three-message cycle repeats for every subsequent connection attempt in this interval: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."; the repeated entries are omitted.)
00:30:04.306 [2024-07-23 01:09:48.390436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.390601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.390633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.390796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.390942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.390966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.391132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.391323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.391348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.391513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.391696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.391720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.391906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.392049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.392076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.392263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.392430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.392455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.392597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.392774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.392800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 
00:30:04.306 [2024-07-23 01:09:48.392992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.393157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.393181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.393373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.393504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.393529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.393666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.393829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.393853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.394014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.394156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.394182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.394380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.394540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.394566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.394767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.394935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.394959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 00:30:04.306 [2024-07-23 01:09:48.395126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.395292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.395317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.306 qpair failed and we were unable to recover it. 
00:30:04.306 [2024-07-23 01:09:48.395457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.306 [2024-07-23 01:09:48.395647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.395672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.395834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.395977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.396001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.396192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.396332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.396356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.396492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.396666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.396692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.396836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.396999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.397028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.397166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.397352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.397376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.397513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.397659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.397686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 
00:30:04.307 [2024-07-23 01:09:48.397851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.398009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.398034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.398172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.398339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.398364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.398555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.398728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.398754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.398923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.399113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.399138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.399298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.399452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.399476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.399609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.399757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.399782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.399936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.400075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.400102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 
00:30:04.307 [2024-07-23 01:09:48.401645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.401822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.401853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.402039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.402193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.402221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.402411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.402629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.402658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.402844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.403037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.403061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.403232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.403405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.403431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.403570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.403749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.403774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.403936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.404130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.404155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 
00:30:04.307 [2024-07-23 01:09:48.404294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.404458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.404482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.404637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.404831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.404855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.405026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.405166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.405192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.307 qpair failed and we were unable to recover it. 00:30:04.307 [2024-07-23 01:09:48.405361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.307 [2024-07-23 01:09:48.405499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.405525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.405717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.405848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.405872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.406034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.406200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.406226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.406414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.406571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.406594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 
00:30:04.308 [2024-07-23 01:09:48.406797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.406952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.406977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.407148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.407308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.407333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.407477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.407641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.407666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.407803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.407945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.407970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.408136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.408322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.408347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.408512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.408689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.408715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.408881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.409041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.409067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 
00:30:04.308 [2024-07-23 01:09:48.409231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.409421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.409445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.409611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.409759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.409784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.409934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.410095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.410122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.410260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.410431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.410456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.410624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.410793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.410819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.410993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.411186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.411211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.411376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.411537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.411561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 
00:30:04.308 [2024-07-23 01:09:48.411748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.411891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.411917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.412058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.412223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.412247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.412383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.412574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.412598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.412765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.412908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.412933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.413103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.413242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.413266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.413404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.413597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.413628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.413808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.413952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.413977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 
00:30:04.308 [2024-07-23 01:09:48.414144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.414308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.414333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.414467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.414643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.414671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.414806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.414951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.414975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.415169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.415305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.308 [2024-07-23 01:09:48.415330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.308 qpair failed and we were unable to recover it. 00:30:04.308 [2024-07-23 01:09:48.415491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.415686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.415711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.415878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.416040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.416064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.416228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.416371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.416396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 
00:30:04.309 [2024-07-23 01:09:48.416560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.416714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.416739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.416942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.417107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.417131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.417270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.417451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.417477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.417655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.417822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.417847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.417986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.418151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.418176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.418316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.418480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.418505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.418672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.418810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.418834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 
00:30:04.309 [2024-07-23 01:09:48.419002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.419146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.419173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.419372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.419508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.419533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.419723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.419870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.419896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.420039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.420182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.420208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.420373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.420514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.420540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.420731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.420874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.420903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.421064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.421218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.421243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 
00:30:04.309 [2024-07-23 01:09:48.421413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.421608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.421641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.421806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.421977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.422002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.422191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.422388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.422413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.422576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.422778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.422804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.422942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.423109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.423136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.423302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.423437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.423462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.423604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.423788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.423814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 
00:30:04.309 [2024-07-23 01:09:48.423961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.424120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.424145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.424339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.424529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.424557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.424727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.424862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.424894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.425053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.425242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.425267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.309 qpair failed and we were unable to recover it. 00:30:04.309 [2024-07-23 01:09:48.425460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.309 [2024-07-23 01:09:48.425664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.310 [2024-07-23 01:09:48.425689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.310 qpair failed and we were unable to recover it. 00:30:04.310 [2024-07-23 01:09:48.425888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.310 [2024-07-23 01:09:48.426052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.310 [2024-07-23 01:09:48.426077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.310 qpair failed and we were unable to recover it. 00:30:04.310 [2024-07-23 01:09:48.426243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.310 [2024-07-23 01:09:48.426377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.310 [2024-07-23 01:09:48.426402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.310 qpair failed and we were unable to recover it. 
00:30:04.310 [2024-07-23 01:09:48.426542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.310 [2024-07-23 01:09:48.426710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.310 [2024-07-23 01:09:48.426735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420
00:30:04.310 qpair failed and we were unable to recover it.
[The same failure group repeats without variation from 01:09:48.426 through 01:09:48.482 (elapsed markers 00:30:04.310-00:30:04.588): two posix_sock_create connect() errors with errno = 111, then one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions.]
00:30:04.588 [2024-07-23 01:09:48.482261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.482402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.482427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.482591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.482766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.482796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.482990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.483129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.483153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.483347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.483541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.483564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.483767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.483937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.483962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.484134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.484297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.484323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.484490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.484652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.484678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 
00:30:04.588 [2024-07-23 01:09:48.484869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.485060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.485084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.485279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.485438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.485462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.485629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.485763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.485786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.485961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.486123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.486147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.486292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.486455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.486484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.486652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.486817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.486841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.487031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.487193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.487218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 
00:30:04.588 [2024-07-23 01:09:48.487410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.487570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.487594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.487788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.487924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.487947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.488135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.488268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.488292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.488456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.488641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.488674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.488842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.488986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.489010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.489172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.489313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.489338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.489530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.489693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.489717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 
00:30:04.588 [2024-07-23 01:09:48.489882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.490050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.490079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.490271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.490434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.490459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.490594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.490745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.490771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.490966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.491098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.491123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.491259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.491418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.491442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.491602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.491753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.491779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.491925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.492082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.492106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 
00:30:04.588 [2024-07-23 01:09:48.492299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.492462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.492487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.492646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.492787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.492812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.493004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.493189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.493213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.493376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.493538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.493563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.493776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.493938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.493962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.494152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.494314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.494339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.494503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.494669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.494694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 
00:30:04.588 [2024-07-23 01:09:48.494853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.495042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.495067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.495230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.495422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-07-23 01:09:48.495446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.588 qpair failed and we were unable to recover it. 00:30:04.588 [2024-07-23 01:09:48.495617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.495807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.495832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.495966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.496132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.496159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.496300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.496458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.496482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.496678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.496843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.496868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.497059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.497224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.497249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.497392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.497556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.497581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.497743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.497905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.497930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.498070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.498229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.498252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.498393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.498583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.498608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.498777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.498937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.498961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.499148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.499290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.499314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.499461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.499624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.499649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.499789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.499946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.499971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.500133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.500266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.500289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.500453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.500640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.500666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.500865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.501009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.501034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.501178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.501324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.501348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.501542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.501710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.501737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.501925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.502104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.502143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.502352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.502516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.502540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.502703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.502864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.502891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.503086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.503255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.503280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.503444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.503631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.503665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.503845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.504004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.504031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.504201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.504402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.504428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.504573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.504789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.504816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.504943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.505109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.505134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.505302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.505524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.505549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.505716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.505847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.505872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.506009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.506201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.506226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.506418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.506579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.506604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.506814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.506971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.507010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.507208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.507371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.507396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.507530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.507719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.507744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.507888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.508037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.508062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.508273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.508447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.508473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.508660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.508906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.508930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.509081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.509386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.509411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.509569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.509754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.509779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.509989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.510176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.510202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.510444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.510592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.510627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.510789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.510950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.510975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.511146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.511338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.511363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.511532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.511720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.511746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.511913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.512105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.512130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.512273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.512460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.512485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.512676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.512847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.512872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.513065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.513225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.513267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.513483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.513662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.513688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.513879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.514041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.514066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.514215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.514387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.514412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.514580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.514760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.514787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.514986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.515152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.515179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.515344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.515475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.515499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 
00:30:04.589 [2024-07-23 01:09:48.515673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.515827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.515852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.516052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.516263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.516292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.516470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.516643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.516676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.516880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.517050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.517076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.517224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.517421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.517447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.517625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.517772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.517797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.589 qpair failed and we were unable to recover it. 00:30:04.589 [2024-07-23 01:09:48.517943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.518139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.589 [2024-07-23 01:09:48.518166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.518333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.518532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.518559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.518707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.518876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.518902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.519048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.519222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.519248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.519413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.519554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.519579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.519791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.519930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.519956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.520124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.520295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.520321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.520462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.520638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.520665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.523626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.523820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.523850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.524062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.524232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.524259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.524430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.524576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.524603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.524778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.524943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.524969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.525165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.525334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.525359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.525552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.525724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.525750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.525948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.526120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.526146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.526316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.526485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.526511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.526702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.526895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.526923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.527127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.527298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.527324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.527495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.527663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.527689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.527860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.528024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.528050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.528219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.528389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.528414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.528564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.528706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.528733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.528929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.529075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.529100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.529271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.529440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.529466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.529671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.529815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.529840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.530040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.530188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.530214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.530413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.530587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.530618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.530784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.530945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.530972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.532638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.532866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.532895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.533080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.533264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.533294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.533506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.533684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.533712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.533906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.534097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.534126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.534396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.534575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.534601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.534772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.534911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.534937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.535130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.535275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.535315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.535493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.535696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.535724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.535891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.536118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.536144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.536330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.537631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.537667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.537886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.538100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.538130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.538329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.538518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.538548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.538721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.538900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.538927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.539109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.539271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.539297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.539473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.539675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.539701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.539868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.540042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.540084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.540262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.540453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.540479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.542627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.542848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.542880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.543108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.543298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.543324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.543543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.543721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.543747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.543921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.544067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.544093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.544286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.544464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.544491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.544638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.544832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.544858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.545002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.545224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.545250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.545404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.545649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.545675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.545861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.546120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.546147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.546344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.546477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.546503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.546644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.546816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.546846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.547001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.547140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.547166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 
00:30:04.590 [2024-07-23 01:09:48.547346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.547514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.547540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.590 [2024-07-23 01:09:48.547775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.551638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.590 [2024-07-23 01:09:48.551673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.590 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.551895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.552089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.552118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.552282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.552474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.552500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.552673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.552848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.552874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.553057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.553194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.553219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.553389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.553554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.553580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.553813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.554006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.554047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.554271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.554445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.554476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.554721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.554898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.554924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.555098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.555288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.555314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.555461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.555655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.555682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.555855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.556048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.556074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.556239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.556410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.556451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.556699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.556905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.556930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.557076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.557242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.557268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.557420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.557587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.557618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.557777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.557916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.557942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.558155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.558348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.558379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.558532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.558682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.558708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.558912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.559113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.559139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.559308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.559513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.559539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.559708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.559880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.559906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.560072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.560249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.560275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.560427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.560639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.560667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.560859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.561009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.561034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.561232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.561625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.561651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.561832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.561977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.562003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.565629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.565815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.565849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.566062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.566210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.566251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.566434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.566644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.566671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.566842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.567038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.567064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.567235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.567406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.567432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.567600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.567765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.567791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.567972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.568166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.568194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.568437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.568579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.568629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.568903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.569060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.569087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.569348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.569519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.569545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.569745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.569919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.569945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.570092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.570262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.570287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.570455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.570625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.570659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.570839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.571003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.571029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.571222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.571416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.571443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.571638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.571785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.571811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.571975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.572171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.572197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.572343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.572512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.572539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.572683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.572852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.572879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.573094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.573237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.573263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.573455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.573625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.573651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.573827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.574023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.574049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.574193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.574434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.574460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.574624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.575627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.575658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.575854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.576072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.576101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.576279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.576455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.576484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.576688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.576862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.576889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.577124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.577323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.577349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.591 [2024-07-23 01:09:48.577496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.577671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.577698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.577890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.580645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.580679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.580872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.581089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.581118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.581323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.581517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.581543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.581688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.581855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.581881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.582050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.582204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.582245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 00:30:04.591 [2024-07-23 01:09:48.582505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.582769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.591 [2024-07-23 01:09:48.582797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.591 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.582973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.583164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.583190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.583336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.583504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.583546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.583701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.583849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.583875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.584050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.584221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.584246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.584455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.584676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.584704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.584876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.585069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.585095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.585272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.585508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.585534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.585724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.585876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.585901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.586086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.586281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.586307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.586487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.586624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.586650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.586820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.586968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.586993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.587225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.587366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.587392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.587588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.589625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.589655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.589860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.590006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.590047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.590225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.590395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.590421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.590589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.590768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.590794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.590970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.591162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.591188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.591385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.591531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.591572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.591794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.591957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.591982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.592149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.592354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.592380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.592524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.592766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.592792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.592967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.593137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.593163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.593360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.593499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.593525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.593665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.593831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.593857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.594120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.594328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.594354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.594521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.594716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.594744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.598627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.598824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.598855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.599047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.599236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.599265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.599429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.599725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.599752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.599966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.600130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.600170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.600373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.600518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.600545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.600700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.600874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.600899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.601097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.601267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.601292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.601460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.601621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.601648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.601809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.601955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.601980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.602166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.602400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.602426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.602649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.602864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.602891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.603057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.603257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.603284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.603454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.603658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.603684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.603833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.604000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.604027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.604219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.604394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.604421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.604585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.604800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.604826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.604993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.605237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.605279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.605467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.605637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.605671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.605837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.606069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.606096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.606352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.606560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.606586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.606760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.606941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.606968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.607115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.607289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.607314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.607495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.607664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.607691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.607883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.608031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.608056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.608213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.608407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.608433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.608574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.612625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.612659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.612897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.613101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.613128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.613329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.613480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.613506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.613648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.613806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.613832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.614030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.614179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.614204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 
00:30:04.592 [2024-07-23 01:09:48.614385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.614663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.614691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.592 qpair failed and we were unable to recover it. 00:30:04.592 [2024-07-23 01:09:48.614896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.615073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.592 [2024-07-23 01:09:48.615099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.615331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.615482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.615508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.615718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.615924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.615950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.616126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.616407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.616433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.616663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.616834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.616860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.617006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.617202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.617243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.617445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.617638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.617670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.617883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.618086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.618113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.618246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.618412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.618438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.618609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.618897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.618923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.619078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.619225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.619251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.619444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.619649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.619676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.619817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.619995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.620021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.620224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.620397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.620424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.620591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.620777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.620804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.620974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.621131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.621157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.621353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.621524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.621551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.621692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.621940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.621966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.622234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.622464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.622491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.622668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.622873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.622900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.623069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.626627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.626664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.626887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.627058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.627100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.627284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.627432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.627457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.627630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.627826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.627852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.628011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.628183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.628224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.628375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.628542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.628568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.628845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.629092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.629118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.629311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.629490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.629516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.629751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.629895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.629921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.630074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.630256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.630287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.630444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.630627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.630654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.630822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.630989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.631014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.631204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.631392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.631417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.631576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.631723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.631750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.631894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.632056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.632080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.632270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.632407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.632433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.632602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.632775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.632801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.632940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.633107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.633132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.633294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.633439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.633466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.633647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.633813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.633844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.634016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.634147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.634171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.634357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.634523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.634549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.634730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.634894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.634919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.635081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.635268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.635293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.635432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.635572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.635596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.635748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.635912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.635938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.636109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.636273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.636298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.636457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.636624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.636650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.636790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.636956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.636981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.637122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.637262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.637292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.637483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.637647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.637673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.637853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.638079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.638104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.638243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.638405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.638431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.638571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.638777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.638803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.638976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.639143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.639169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.639327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.639463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.639488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.639678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.639809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.639833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.639971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.640130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.640154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.640315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.640477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.640501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.640663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.640827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.640856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.640992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.641129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.641154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.641344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.641509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.641533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 
00:30:04.593 [2024-07-23 01:09:48.641673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.641809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.641834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.641998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.642142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.642169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.642333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.642495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.642520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.642661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.642825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.642850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.642985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.643152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.643177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.593 qpair failed and we were unable to recover it. 00:30:04.593 [2024-07-23 01:09:48.643309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.593 [2024-07-23 01:09:48.643481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.643505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.643642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.643886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.643911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.644069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.644239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.644263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.644404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.644566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.644590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.644759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.644945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.644970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.645135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.645375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.645399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.645566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.645702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.645727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.645912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.646070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.646094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.646226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.646385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.646409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.646596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.646769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.646795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.646932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.647120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.647145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.647309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.647474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.647499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.647670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.647804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.647830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.647976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.648141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.648167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.648313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.648501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.648525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.648686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.648855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.648882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.649047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.649232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.649256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.649445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.649634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.649659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.649847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.649987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.650012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.650154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.650344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.650369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.650568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.650753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.650778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.651019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.651208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.651233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.651370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.651511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.651535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.651677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.651842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.651867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.652033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.652196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.652220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.652358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.652546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.652570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.652714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.652876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.652901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.653069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.653231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.653255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.653425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.653586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.653610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.653769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.653962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.653987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.654149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.654312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.654337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.654494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.654632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.654659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.654830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.654996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.655020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.655190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.655352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.655378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.655506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.655697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.655722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.655864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.656063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.656087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.656280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.656475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.656499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.656663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.656795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.656820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.656980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.657118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.657144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.657280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.657443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.657467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.657628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.657821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.657845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.657980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.658124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.658148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.658338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.658499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.658524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.658690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.658828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.658852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.658983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.659149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.659173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.659315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.659555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.659580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.659753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.659886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.659910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.660095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.660260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.660287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.660453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.660693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.660719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.660880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.661075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.661100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.661260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.661396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.661423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 
00:30:04.594 [2024-07-23 01:09:48.661591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.661770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.661795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.661949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.662109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.662133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.662336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.662505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.662530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.662724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.662864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.662890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.663026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.663160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.663185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.663428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.663593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.663625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.594 qpair failed and we were unable to recover it. 00:30:04.594 [2024-07-23 01:09:48.663788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.594 [2024-07-23 01:09:48.663929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.663953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.664109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.664272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.664296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.664462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.664627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.664652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.664817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.665007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.665032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.665274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.665462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.665487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.665673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.665864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.665889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.666052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.666209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.666234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.666400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.666592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.666625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.666788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.666948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.666973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.667160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.667320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.667345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.667532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.667696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.667723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.667890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.668085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.668110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.668248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.668411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.668437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.668594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.668789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.668814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.668954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.669146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.669170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.669336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.669467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.669492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.669666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.669831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.669855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.670018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.670189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.670213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.670354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.670513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.670538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.670732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.670875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.670900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.671093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.671229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.671253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.671414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.671574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.671598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.671791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.671926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.671953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.672141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.672298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.672322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.672510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.672643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.672669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.672805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.672960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.672984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.673177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.673356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.673380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.673516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.673682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.673707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.673870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.674032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.674056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.674213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.674392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.674416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.674581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.674756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.674781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.674941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.675141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.675166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.675301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.675462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.675487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.675630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.675788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.675812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.675999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.676185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.676209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.676368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.676531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.676555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.676694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.676875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.676900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.677089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.677223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.677248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.677389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.677555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.677580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.677745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.677905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.677930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.678069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.678235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.678259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.678422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.678586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.678617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.678757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.678895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.678921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.679088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.679255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.679279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.679443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.679623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.679649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.679841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.680005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.680030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.680163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.680327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.680351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.680546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.680747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.680773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.680940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.681093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.681117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.681276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.681437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.681461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.681633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.681769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.681793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.681978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.682143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.682169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.682329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.682491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.682515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.682710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.682854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.682879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.683021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.683187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.683211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.683351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.683515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.683539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.683676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.683864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.683889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 
00:30:04.595 [2024-07-23 01:09:48.684046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.684173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.684197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.684333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.684518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.684542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.595 [2024-07-23 01:09:48.684703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.684839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.595 [2024-07-23 01:09:48.684863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.595 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.685027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.685165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.685192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.685352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.685513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.685537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.685712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.685852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.685877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.686054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.686212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.686236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.686397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.686537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.686561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.686736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.686902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.686926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.687091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.687256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.687284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.687417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.687611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.687642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.687779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.687913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.687937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.688128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.688261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.688285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.688447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.688606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.688638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.688825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.689008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.689034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.689231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.689360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.689384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.689525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.689692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.689718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.689910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.690100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.690124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.690257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.690416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.690441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.690632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.690763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.690794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.690928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.691069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.691094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.691238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.691378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.691402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.691564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.691750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.691775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.691942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.692076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.692102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.692295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.692434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.692458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.692623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.692785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.692809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.692972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.693131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.693155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.693356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.693520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.693544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.693711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.693854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.693878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.694073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.694261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.694289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.694432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.694600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.694633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.694780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.694975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.695000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.695127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.695268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.695293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.695454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.695628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.695655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.695795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.695987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.696012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.696176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.696334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.696358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.696490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.696650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.696675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.696835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.697027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.697051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.697183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.697314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.697339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.697502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.697664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.697694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.697861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.698053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.698077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.698231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.698368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.698394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.698523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.698686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.698710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.698880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.699075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.699099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.699261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.699418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.699443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.699604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.699799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.699824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.699957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.700091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.700117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.700304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.700457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.700482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.700623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.700760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.700786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.700931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.701122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.701146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.701335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.701501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.701526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.701696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.701825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.701849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.702016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.702204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.702229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.702392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.702553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.702577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.702775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.702936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.702962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.703153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.703341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.703365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 
00:30:04.596 [2024-07-23 01:09:48.703526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.703698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.703723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.703887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.704051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.704076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.704240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.704402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.704429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.704588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.704725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.704750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.704921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.705090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.705114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.705282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.705467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.705491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.596 qpair failed and we were unable to recover it. 00:30:04.596 [2024-07-23 01:09:48.705637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.705831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.596 [2024-07-23 01:09:48.705856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.705996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.706124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.706148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.706318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.706481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.706505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.706644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.706805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.706830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.707025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.707159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.707184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.707368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.707498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.707522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.707684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.707877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.707901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.708060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.708196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.708220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.708412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.708570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.708594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.708777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.708911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.708935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.709097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.709292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.709316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.709453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.709632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.709657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.709823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.709989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.710013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.710174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.710338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.710364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.710530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.710669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.710704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.710875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.711010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.711034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.711199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.711396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.711420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.711556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.711719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.711744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.711911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.712076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.712101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.712290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.712452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.712476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.712620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.712789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.712814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.712979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.713140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.713164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.713329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.713483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.713507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.713676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.713844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.713870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.714059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.714200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.714226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.714420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.714608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.714640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.714766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.714905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.714929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.715091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.715232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.715258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.715423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.715555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.715580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.715743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.715912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.715937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.716070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.716228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.716253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.716429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.716589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.716619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.716762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.716922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.716947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.717083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.717266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.717291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.717475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.717609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.717647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.717837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.717975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.717999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.718161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.718299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.718323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.718496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.718665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.718692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.718853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.719017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.719042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.719231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.719400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.719425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.719584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.719758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.719783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.719942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.720073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.720097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.720264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.720430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.720454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.720594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.720745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.720770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.720910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.721100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.721125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.721261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.721419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.721443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.721607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.721784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.721810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.721969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.722135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.722160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.722324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.722487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.722514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.722679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.722813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.722838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.723001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.723139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.723163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.723357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.723488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.723513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.723680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.723818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.723843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.723975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.724144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.724169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.724301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.724508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.724532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.724667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.724831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.724855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.725014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.725176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.725200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.725392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.725549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.725574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.725727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.725863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.725888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.726053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.726192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.726218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.726383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.726544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.726570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.726720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.726909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.726934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.727076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.727215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.727240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.727404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.727565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.727589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 
00:30:04.597 [2024-07-23 01:09:48.727736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.727870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.597 [2024-07-23 01:09:48.727896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.597 qpair failed and we were unable to recover it. 00:30:04.597 [2024-07-23 01:09:48.728089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.728254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.728279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.728415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.728574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.728599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.728745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.728904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.728928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.729096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.729268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.729294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.729433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.729641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.729667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.729797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.729930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.729955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.730119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.730282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.730306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.730500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.730664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.730689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.730878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.731168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.731504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.731831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.731987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.732120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.732285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.732309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.732477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.732645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.732672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.732836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.733004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.733029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.733195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.733385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.733409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.733599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.733738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.733764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.733909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.734047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.734071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.734237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.734424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.734448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.734640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.734833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.734858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.735050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.735187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.735211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.735343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.735532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.735557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.735746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.735905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.735929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.736112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.736278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.736303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.736468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.736628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.736653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.736823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.736963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.736987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.737176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.737307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.737332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.737493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.737657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.737683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.737848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.738012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.738037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.738196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.738359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.738383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.738550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.738717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.738743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.738909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.739096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.739121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.739285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.739450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.739475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.739620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.739770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.739795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.739959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.740126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.740151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.740312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.740450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.740475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.740618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.740782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.740807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.740946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.741133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.741157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.741293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.741481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.741505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.741671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.741816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.741840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.742004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.742171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.742196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.742359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.742519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.742545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.742706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.742850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.742874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.743010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.743170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.743198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.743364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.743522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.743546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.743737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.743894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.743919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.744050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.744191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.744217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.744353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.744543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.744568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.744750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.744913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.744940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.745125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.745256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.745281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.745445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.745639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.745665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.745804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.745968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.745992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.746127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.746290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.746315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.746472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.746632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.746661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.746798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.746986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.747011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 
00:30:04.598 [2024-07-23 01:09:48.747170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.747305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.747331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.747519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.747682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.747706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.747898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.748059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.748083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.748236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.748371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.748395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.748557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.748723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.748748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.748902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.749068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.749092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.598 qpair failed and we were unable to recover it. 00:30:04.598 [2024-07-23 01:09:48.749255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.598 [2024-07-23 01:09:48.749393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.749418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.749553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.749745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.749771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.749932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.750062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.750090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.750282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.750471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.750495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.750655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.750823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.750848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.750988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.751153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.751177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.751338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.751497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.751521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.751704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.751861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.751886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.752050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.752190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.752214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.752378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.752519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.752545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.752709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.752865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.752890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.753053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.753214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.753238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.753440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.753601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.753645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.753780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.753970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.753995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.754196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.754360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.754384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.754543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.754681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.754706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.754849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.755008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.755033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.755200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.755330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.755355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.755515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.755708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.755733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.755892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.756020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.756045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.756188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.756354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.756378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.756539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.756702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.756727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.756896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.757058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.757083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.757251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.757410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.757435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.757597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.757775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.757801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.757990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.758159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.758183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.758349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.758484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.758509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.758666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.758802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.758827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.758966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.759123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.759147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.759335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.759469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.759494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.759690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.759853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.759878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.760040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.760204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.760229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.760419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.760579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.760603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.760753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.760915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.760940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.761104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.761267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.761291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.761428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.761636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.761662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.761802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.761966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.761992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.762134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.762269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.762293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.762484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.762649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.762675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.762867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.762999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.763025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.763191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.763332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.763357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.763491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.763651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.763676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.763842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.764002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.764027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.764196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.764382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.764406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.764567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.764732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.764757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.764892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.765025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.765050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.765206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.765370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.765394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.765585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.765753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.765779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.765950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.766088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.766113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.766252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.766418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.766442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.766604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.766779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.766805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.766944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.767083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.767108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.767285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.767443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.767467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.767643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.767802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.767826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.768012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.768153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.768179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.768369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.768526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.768551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.768695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.768829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.768854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 
00:30:04.599 [2024-07-23 01:09:48.769018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.769178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.599 [2024-07-23 01:09:48.769202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.599 qpair failed and we were unable to recover it. 00:30:04.599 [2024-07-23 01:09:48.769363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.769521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.769546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.769739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.769908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.769934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.770099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.770261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.770287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.770427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.770607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.770643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.770783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.770946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.770969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.771108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.771273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.771298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 
00:30:04.600 [2024-07-23 01:09:48.771469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.771667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.771693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.771862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.772016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.772041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.772231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.772390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.772414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.772577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.772718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.772743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.772887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.773027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.773051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.773213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.773381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.773405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.773569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.773735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.773759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 
00:30:04.600 [2024-07-23 01:09:48.773900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.774035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.774058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.774246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.774401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.774426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.774621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.774761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.774786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.774923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.775085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.775108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.775247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.775389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.775414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.775578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.775716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.775742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.775935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.776089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.776114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 
00:30:04.600 [2024-07-23 01:09:48.776278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.776406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.776429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.776563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.776687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.776712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.776874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.777033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.777058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.777218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.777366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.777390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.777556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.777714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.777738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.777906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.778040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.778065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 00:30:04.600 [2024-07-23 01:09:48.778231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.778388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.600 [2024-07-23 01:09:48.778412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.600 qpair failed and we were unable to recover it. 
00:30:04.600 [2024-07-23 01:09:48.778550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.600 [2024-07-23 01:09:48.778714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.600 [2024-07-23 01:09:48.778738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:04.600 qpair failed and we were unable to recover it.
[... the same four-message failure sequence (two posix.c:1032:posix_sock_create connect() errors with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 01:09:48.778550 and 01:09:48.831763 (elapsed 00:30:04.600 through 00:30:04.879) ...]
00:30:04.879 [2024-07-23 01:09:48.831763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.879 [2024-07-23 01:09:48.831946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.879 [2024-07-23 01:09:48.831971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:04.879 qpair failed and we were unable to recover it.
00:30:04.879 [2024-07-23 01:09:48.832106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.832269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.832294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.832431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.832567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.832592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.832739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.832924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.832948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.833108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.833298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.833322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.833459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.833590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.833622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.833794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.833982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.834006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.834171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.834343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.834367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 
00:30:04.879 [2024-07-23 01:09:48.834506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.834653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.834679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.834845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.835011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.835035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.835198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.835355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.835379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.835520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.835673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.835698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.835858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.836019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.836044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.836238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.836374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.836400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.879 qpair failed and we were unable to recover it. 00:30:04.879 [2024-07-23 01:09:48.836563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.879 [2024-07-23 01:09:48.836700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.836725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 
00:30:04.880 [2024-07-23 01:09:48.836862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.837003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.837028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.837220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.837410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.837434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.837565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.837735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.837760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.837924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.838085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.838109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.838261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.838418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.838442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.838605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.838746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.838772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.838936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.839125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.839149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 
00:30:04.880 [2024-07-23 01:09:48.839315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.839472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.839496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.839653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.839789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.839814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.839963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.840128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.840152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.840316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.840502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.840526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.840720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.840899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.840924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.841057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.841188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.841211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.841394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.841561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.841585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 
00:30:04.880 [2024-07-23 01:09:48.841742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.841909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.841934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.842094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.842252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.842276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.842447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.842618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.842644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.842835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.843023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.843048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.843210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.843370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.843393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.843559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.843718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.843744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.843881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.844054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.844078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 
00:30:04.880 [2024-07-23 01:09:48.844269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.844411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.844435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.844575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.844714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.844738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.844883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.845015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.845040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.845200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.845388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.845413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.845551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.845719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.845745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.880 qpair failed and we were unable to recover it. 00:30:04.880 [2024-07-23 01:09:48.845903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.880 [2024-07-23 01:09:48.846039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.846063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.846229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.846361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.846384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 
00:30:04.881 [2024-07-23 01:09:48.846553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.846716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.846742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.846904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.847093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.847117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.847282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.847445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.847471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.847604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.847759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.847783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.847918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.848079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.848105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.848253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.848417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.848440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.848611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.848779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.848804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 
00:30:04.881 [2024-07-23 01:09:48.848964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.849102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.849125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.849261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.849396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.849422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.849633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.849778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.849802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.849958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.850145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.850169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.850337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.850499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.850523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.850698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.850856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.850879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.851045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.851233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.851262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 
00:30:04.881 [2024-07-23 01:09:48.851429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.851626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.851651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.851792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.851962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.851986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.852152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.852294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.852319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.852454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.852639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.852664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.852856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.853010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.853034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.853201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.853367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.853392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.853556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.853693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.853718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 
00:30:04.881 [2024-07-23 01:09:48.853921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.854079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.854104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.854267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.854447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.854471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.854635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.854791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.854819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.854953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.855087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.855111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.855276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.855434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.855458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.881 qpair failed and we were unable to recover it. 00:30:04.881 [2024-07-23 01:09:48.855624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.881 [2024-07-23 01:09:48.855758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.855781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.855967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.856101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.856126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 
00:30:04.882 [2024-07-23 01:09:48.856265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.856426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.856452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.856610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.856754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.856778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.856920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.857081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.857105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.857248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.857384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.857409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.857569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.857713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.857739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.857944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.858109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.858138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.858278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.858467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.858492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 
00:30:04.882 [2024-07-23 01:09:48.858658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.858836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.858860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.859008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.859138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.859162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.859351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.859487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.859510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.859703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.859867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.859891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.860030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.860226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.860251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.860411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.860568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.860592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.860732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.860863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.860887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 
00:30:04.882 [2024-07-23 01:09:48.861022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.861186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.861211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.861375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.861538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.861567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.861722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.861860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.861884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.862074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.862210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.862233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.862410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.862550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.862574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.862726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.862890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.862915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.863084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.863243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.863267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 
00:30:04.882 [2024-07-23 01:09:48.863450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.863622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.863647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.863814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.863988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.864012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.864155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.864314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.864338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.864478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.864668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.864695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.864827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.864994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.865018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.882 qpair failed and we were unable to recover it. 00:30:04.882 [2024-07-23 01:09:48.865190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-07-23 01:09:48.865353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-07-23 01:09:48.865377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.883 qpair failed and we were unable to recover it. 00:30:04.883 [2024-07-23 01:09:48.865574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-07-23 01:09:48.865753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-07-23 01:09:48.865779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.883 qpair failed and we were unable to recover it. 
00:30:04.883 [2024-07-23 01:09:48.865922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.883 [2024-07-23 01:09:48.866086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.883 [2024-07-23 01:09:48.866111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:04.883 qpair failed and we were unable to recover it.
00:30:04.883 [...] the same sequence of records (two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" lines, one "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420" line, then "qpair failed and we were unable to recover it.") repeats for each reconnect attempt from [2024-07-23 01:09:48.866271] through [2024-07-23 01:09:48.919095], console timestamps 00:30:04.883 to 00:30:04.888.
00:30:04.888 [2024-07-23 01:09:48.919282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.888 [2024-07-23 01:09:48.919418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.888 [2024-07-23 01:09:48.919444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.888 qpair failed and we were unable to recover it. 00:30:04.888 [2024-07-23 01:09:48.919619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.888 [2024-07-23 01:09:48.919758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.888 [2024-07-23 01:09:48.919783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.888 qpair failed and we were unable to recover it. 00:30:04.888 [2024-07-23 01:09:48.919926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.888 [2024-07-23 01:09:48.920092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.920116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.920311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.920480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.920505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.920671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.920825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.920849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.921017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.921184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.921209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.921367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.921536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.921560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 
00:30:04.889 [2024-07-23 01:09:48.921758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.921903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.921927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.922126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.922260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.922283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.922448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.922640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.922665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.922824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.922980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.923004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.923150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.923337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.923361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.923524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.923710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.923735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.923896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.924034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.924059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 
00:30:04.889 [2024-07-23 01:09:48.924221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.924387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.924411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.924579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.924730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.924754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.924945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.925106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.925129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.925263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.925424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.925448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.925639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.925777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.925802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.925965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.926127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.926150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.926310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.926474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.926499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 
00:30:04.889 [2024-07-23 01:09:48.926631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.926772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.926796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.926957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.927091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.927117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.927305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.927444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.927468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.927632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.927797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.927822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.927991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.928160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.928184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.889 qpair failed and we were unable to recover it. 00:30:04.889 [2024-07-23 01:09:48.928344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.928535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.889 [2024-07-23 01:09:48.928559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.928726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.928893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.928918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 
00:30:04.890 [2024-07-23 01:09:48.929081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.929241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.929267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.929409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.929551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.929575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.929742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.929881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.929907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.930068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.930256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.930279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.930416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.930577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.930602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.930790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.930939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.930963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.931131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.931294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.931318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 
00:30:04.890 [2024-07-23 01:09:48.931458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.931627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.931653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.931789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.931958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.931981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.932127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.932270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.932294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.932457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.932623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.932649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.932817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.932983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.933008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.933206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.933372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.933396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.933553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.933697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.933722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 
00:30:04.890 [2024-07-23 01:09:48.933882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.934014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.934038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.934202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.934366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.934390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.934577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.934747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.934772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.934964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.935101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.935126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.935264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.935427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.935451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.935626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.935764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.935790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.935976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.936114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.936137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 
00:30:04.890 [2024-07-23 01:09:48.936278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.936441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.936466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.936608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.936759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.936784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.936971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.937162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.937186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.937351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.937491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.937515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.890 [2024-07-23 01:09:48.937652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.937794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.890 [2024-07-23 01:09:48.937818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.890 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.937985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.938147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.938171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.938338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.938495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.938519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 
00:30:04.891 [2024-07-23 01:09:48.938697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.938861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.938885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.939051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.939213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.939238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.939374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.939540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.939565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.939699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.939861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.939885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.940049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.940239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.940263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.940426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.940594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.940625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.940820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.940983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.941007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 
00:30:04.891 [2024-07-23 01:09:48.941173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.941359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.941384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.941571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.941717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.941741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.941892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.942026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.942050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.942186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.942324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.942350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.942514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.942688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.942714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.942876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.943032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.943056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.943221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.943360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.943386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 
00:30:04.891 [2024-07-23 01:09:48.943552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.943744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.943769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.943957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.944088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.944111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.944276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.944450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.944474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.944622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.944788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.944813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.944977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.945114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.945137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.945280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.945481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.945506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.945672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.945810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.945836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 
00:30:04.891 [2024-07-23 01:09:48.945974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.946115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.946139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.946277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.946471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.946495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.946664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.946859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.946884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.947076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.947240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.947266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.891 [2024-07-23 01:09:48.947437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.947599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.891 [2024-07-23 01:09:48.947633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.891 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.947783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.947939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.947963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.948114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.948311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.948335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 
00:30:04.892 [2024-07-23 01:09:48.948475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.948611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.948642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.948786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.948930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.948956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.949118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.949280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.949304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.949441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.949606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.949638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.949774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.949905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.949931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.950101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.950265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.950289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.950456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.950625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.950651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 
00:30:04.892 [2024-07-23 01:09:48.950817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.950971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.950995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.951183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.951375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.951398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.951533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.951692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.951718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.951862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.952023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.952047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.952186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.952353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.952377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.952541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.952677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.952701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.952863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.953050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.953075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 
00:30:04.892 [2024-07-23 01:09:48.953228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.953365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.953390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.953548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.953717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.953742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.953885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.954049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.954074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.954234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.954390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.954414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.954601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.954775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.954799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.954990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.955176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.955199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.955368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.955533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.955557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 
00:30:04.892 [2024-07-23 01:09:48.955704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.955873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.955898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.956041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.956228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.956252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.956417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.956602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.956634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.956800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.956961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.956988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.892 [2024-07-23 01:09:48.957156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.957293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.892 [2024-07-23 01:09:48.957317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.892 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.957454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.957623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.957648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.957813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.957969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.957993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 
00:30:04.893 [2024-07-23 01:09:48.958130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.958324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.958349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.958482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.958650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.958677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.958808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.958998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.959022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.959182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.959374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.959403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.959567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.959759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.959785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.959948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.960088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.960113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.960277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.960445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.960470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 
00:30:04.893 [2024-07-23 01:09:48.960667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.960803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.960830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.960994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.961128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.961153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.961323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.961455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.961481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.961641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.961804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.961830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.961965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.962105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.962129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.962296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.962453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.962477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.962619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.962790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.962819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 
00:30:04.893 [2024-07-23 01:09:48.962987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.963142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.963166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.963350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.963507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.963530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.963663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.963855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.963879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.964067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.964226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.964250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.964409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.964545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.964568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.964743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.964885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.964911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 00:30:04.893 [2024-07-23 01:09:48.965106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.965269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.965293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.893 qpair failed and we were unable to recover it. 
00:30:04.893 [2024-07-23 01:09:48.965432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.965594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.893 [2024-07-23 01:09:48.965625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.965822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.965954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.965980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.966143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.966304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.966332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.966526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.966670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.966695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.966888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.967184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.967510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 
00:30:04.894 [2024-07-23 01:09:48.967837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.967996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.968021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.968187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.968374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.968398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.968576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.968724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.968749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.968887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.969049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.969074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.969241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.969403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.969427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.969598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.969768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.969797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.969959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.970095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.970120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 
00:30:04.894 [2024-07-23 01:09:48.970263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.970424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.970448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.970606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.970776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.970803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.970945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.971139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.971164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.971332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.971475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.971499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.971643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.971809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.971835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.972002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.972179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.972204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.972370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.972532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.972555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 
00:30:04.894 [2024-07-23 01:09:48.972737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.972928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.972953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.973094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.973259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.973283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.973438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.973601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.973632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.973822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.973962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.973988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.974155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.974338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.974363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.974530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.974667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.974691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 00:30:04.894 [2024-07-23 01:09:48.974857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.975024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.975047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.894 qpair failed and we were unable to recover it. 
00:30:04.894 [2024-07-23 01:09:48.975213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.894 [2024-07-23 01:09:48.975379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.975404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.975594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.975737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.975761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.975889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.976023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.976046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.976210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.976341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.976365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.976529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.976684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.976709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.976881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.977069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.977093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.977240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.977422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.977446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 
00:30:04.895 [2024-07-23 01:09:48.977592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.977731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.977756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.977922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.978084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.978109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.978275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.978444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.978468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.978633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.978770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.978795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.978923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.979092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.979116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.979283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.979463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.979487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.979640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.979781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.979805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 
00:30:04.895 [2024-07-23 01:09:48.979994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.980137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.980163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.980311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.980499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.980523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.980662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.980832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.980856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.980999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.981134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.981159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.981326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.981491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.981515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.981701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.981865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.981890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.982050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.982244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.982269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 
00:30:04.895 [2024-07-23 01:09:48.982456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.982646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.982671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.982838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.982996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.983159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.983487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.983798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.983985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.984124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.984281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.984305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 00:30:04.895 [2024-07-23 01:09:48.984448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.984607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.984639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.895 qpair failed and we were unable to recover it. 
00:30:04.895 [2024-07-23 01:09:48.984772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.984936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.895 [2024-07-23 01:09:48.984960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.985126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.985264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.985290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.985459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.985596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.985629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.985798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.985934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.985957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.986094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.986259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.986282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.986449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.986624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.986649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.986815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.986946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.986971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 
00:30:04.896 [2024-07-23 01:09:48.987140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.987304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.987327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.987494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.987651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.987676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.987847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.987986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.988010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.988177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.988338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.988362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.988551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.988709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.988734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.988900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.989057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.989082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.989217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.989373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.989397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 
00:30:04.896 [2024-07-23 01:09:48.989564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.989732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.989759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.989925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.990110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.990135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.990302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.990465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.990489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.990670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.990808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.990833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.991002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.991169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.991194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.991360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.991496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.991521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.991727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.991863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.991887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 
00:30:04.896 [2024-07-23 01:09:48.992025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.992191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.992215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.992379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.992578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.992602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.992755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.992888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.992913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.993076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.993220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.993244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.993409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.993599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.993630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.993807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.994003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.994027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.896 [2024-07-23 01:09:48.994168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.994353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.994378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 
00:30:04.896 [2024-07-23 01:09:48.994540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.994723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.896 [2024-07-23 01:09:48.994749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.896 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.994908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.995070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.995096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.995285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.995449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.995474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.995611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.995819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.995844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.995983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.996116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.996142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.996328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.996492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.996516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.996714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.996879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.996903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 
00:30:04.897 [2024-07-23 01:09:48.997048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.997235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.997260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.997393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.997558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.997582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.997757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.997895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.997919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.998083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.998217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.998241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.998405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.998572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.998597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.998745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.998878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.998903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.999070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.999231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.999256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 
00:30:04.897 [2024-07-23 01:09:48.999420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.999605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.999638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:48.999800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:48.999988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.000012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.000177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.000344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.000369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.000513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.000689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.000715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.000877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.001008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.001033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.001196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.001361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.001386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.001574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.001738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.001764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 
00:30:04.897 [2024-07-23 01:09:49.001926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.002092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.002116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.002252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.002414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.002438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.002629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.002754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.002778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.002945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.003106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.003131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.003295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.003482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.003507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.003697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.003827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.003851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 00:30:04.897 [2024-07-23 01:09:49.004011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.004182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.004206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.897 qpair failed and we were unable to recover it. 
00:30:04.897 [2024-07-23 01:09:49.004345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.004506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.897 [2024-07-23 01:09:49.004530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.004694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.004866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.004891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.005052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.005189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.005213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.005382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.005544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.005568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.005737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.005897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.005920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.006057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.006193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.006217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.006379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.006542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.006566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 
00:30:04.898 [2024-07-23 01:09:49.006727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.006902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.006926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.007091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.007226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.007250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.007382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.007542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.007566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.007759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.007924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.007948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.008080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.008244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.008270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.008414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.008577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.008601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.008750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.008914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.008939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 
00:30:04.898 [2024-07-23 01:09:49.009097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.009231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.009254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.009393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.009559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.009583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.009755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.009897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.009922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.010078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.010239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.010262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.010428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.010570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.010593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.010730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.010895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.010920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.011061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.011222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.011246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 
00:30:04.898 [2024-07-23 01:09:49.011378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.011521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.011545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.011709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.011864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.011888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.012049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.012185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.012210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.012372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.012539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.898 [2024-07-23 01:09:49.012564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.898 qpair failed and we were unable to recover it. 00:30:04.898 [2024-07-23 01:09:49.012734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.012880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.012904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.013040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.013170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.013194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.013354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.013514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.013539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 
00:30:04.899 [2024-07-23 01:09:49.013695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.013832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.013857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.014023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.014213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.014237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.014398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.014586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.014611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.014758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.014894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.014924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.015120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.015246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.015271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.015462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.015642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.015667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.015819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.015965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.015989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 
00:30:04.899 [2024-07-23 01:09:49.016148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.016311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.016336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.016501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.016671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.016695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.016828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.017201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.017522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.017818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.017980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.018138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.018307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.018336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 
00:30:04.899 [2024-07-23 01:09:49.018499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.018639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.018665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.018830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.019019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.019043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.019205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.019366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.019392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.019556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.019700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.019726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.019894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.020056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.020079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.020238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.020400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.020424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.020567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.020734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.020759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 
00:30:04.899 [2024-07-23 01:09:49.020898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.021064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.021089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.021251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.021438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.021462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.021624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.021788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.021817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.021962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.022125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.899 [2024-07-23 01:09:49.022150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.899 qpair failed and we were unable to recover it. 00:30:04.899 [2024-07-23 01:09:49.022279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.022421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.022445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.022611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.022796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.022821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.022985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.023144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.023168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 
00:30:04.900 [2024-07-23 01:09:49.023357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.023523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.023548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.023711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.023881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.023906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.024071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.024236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.024259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.024426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.024588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.024620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.024810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.024973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.024997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.025198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.025362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.025392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.025555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.025716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.025741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 
00:30:04.900 [2024-07-23 01:09:49.025907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.026069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.026092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.026282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.026418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.026443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.026608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.026759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.026782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.026946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.027106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.027130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.027278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.027445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.027469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.027639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.027774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.027799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.027967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.028143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.028168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 
00:30:04.900 [2024-07-23 01:09:49.028348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.028514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.028538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.028705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.028866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.028889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.029024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.029160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.029186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.029383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.029520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.029544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.029733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.029890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.029914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.030052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.030218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.030244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.030400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.030559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.030584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 
00:30:04.900 [2024-07-23 01:09:49.030757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.030914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.030938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.031101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.031261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.031285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.031446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.031608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.031642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.900 qpair failed and we were unable to recover it. 00:30:04.900 [2024-07-23 01:09:49.031783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.900 [2024-07-23 01:09:49.031950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.031975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.032120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.032280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.032304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.032470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.032642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.032668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.032834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.033024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.033048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 
00:30:04.901 [2024-07-23 01:09:49.033185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.033324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.033348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.033510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.033670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.033695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.033885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.034072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.034096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.034260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.034451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.034475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.034621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.034785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.034809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.034944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.035107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.035131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.035268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.035434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.035461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 
00:30:04.901 [2024-07-23 01:09:49.035662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.035798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.035822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.035965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.036096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.036120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.036292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.036457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.036482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.036650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.036810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.036835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.037000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.037166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.037191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.037353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.037484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.037511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.037658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.037823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.037850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 
00:30:04.901 [2024-07-23 01:09:49.038006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.038192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.038217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.038384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.038543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.038567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.038712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.038847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.038872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.039016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.039182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.039207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.039372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.039510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.039537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.039704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.039870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.039894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 00:30:04.901 [2024-07-23 01:09:49.040065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.040251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.901 [2024-07-23 01:09:49.040275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:04.901 qpair failed and we were unable to recover it. 
00:30:04.901 [2024-07-23 01:09:49.040440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.901 [2024-07-23 01:09:49.040605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.901 [2024-07-23 01:09:49.040648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:04.901 qpair failed and we were unable to recover it.
[... the same failure pattern, two posix_sock_create connect() errors (errno = 111) followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x7fb178000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it.", repeats continuously for timestamps 01:09:49.040 through 01:09:49.096 ...]
00:30:05.180 [2024-07-23 01:09:49.096143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.180 [2024-07-23 01:09:49.096307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.180 [2024-07-23 01:09:49.096331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:05.180 qpair failed and we were unable to recover it.
00:30:05.180 [2024-07-23 01:09:49.096492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.096679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.096703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.096862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.097004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.097030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.097223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.097365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.097390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.097551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.097702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.097728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.097923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.098055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.098082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.098249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.098412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.098436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.098573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.098746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.098772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 
00:30:05.180 [2024-07-23 01:09:49.098933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.099067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.099092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.099253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.099391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.099416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.099554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.099693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.099717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.099880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.100012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.100035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.100165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.100326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.100349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.100512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.100676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.100701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.100837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.101003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.101028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 
00:30:05.180 [2024-07-23 01:09:49.101228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.101371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.101395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.101556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.101722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.101749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.101917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.102079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.102103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.102268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.102405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.102431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.102571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.102768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.102793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.102935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.103099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.103125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.103293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.103472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.103495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 
00:30:05.180 [2024-07-23 01:09:49.103665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.103856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.103880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.104051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.104215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.104241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.104409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.104569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.104593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-07-23 01:09:49.104795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.104940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-07-23 01:09:49.104966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.105151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.105298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.105326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.105514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.105670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.105699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.105891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.106041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.106069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-07-23 01:09:49.106252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.106428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.106458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.106671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.106844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.106871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.107082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.107264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.107293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.107478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.107666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.107695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.107854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.108053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.108083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.108245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.108391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.108420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.108641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.108818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.108847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-07-23 01:09:49.109012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.109188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.109216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.109370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.109547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.109576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.109785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.109960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.109989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.110159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.110358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.110387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.110576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.110787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.110817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.111044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.111197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.111225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.111416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.111561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.111590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-07-23 01:09:49.111778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.111955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.111984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.112169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.112316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.112344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.112536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.112730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.112759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.112942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.113116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.113144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.113322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.113500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.113528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.113732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.113908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.113936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-07-23 01:09:49.114156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.114359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-07-23 01:09:49.114388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-07-23 01:09:49.114548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.114751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.114780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.114996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.115197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.115226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.115405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.115607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.115642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.115803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.115986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.116014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.116187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.116363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.116392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.116587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.116760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.116789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.116983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.117170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.117198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-07-23 01:09:49.117408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.117556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.117584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.117815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.117979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.118008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.118198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.118363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.118391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.118574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.118765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.118795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.118978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.119156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.119185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.119365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.119564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.119593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.119806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.119973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.120002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-07-23 01:09:49.120211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.120414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.120443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.120630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.120814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.120843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.121045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.121247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.121274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.121430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.121602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.121648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.121811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.122020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.122047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.122235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.122408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.122437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.122603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.122780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.122807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-07-23 01:09:49.123009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.123158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.123187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.123378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.123551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.123579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.123757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.123928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.123957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.124145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.124320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.124347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.124532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.124750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.124779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.124934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.125079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.125106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-07-23 01:09:49.125316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-07-23 01:09:49.125494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.125522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [2024-07-23 01:09:49.125739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.125893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.125921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.126101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.126252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.126281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.126463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.126630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.126659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.126853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.127014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.127043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.127228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.127400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.127427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.127620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.127794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.127823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.128028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.128203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.128230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [2024-07-23 01:09:49.128419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.128568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.128602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.128834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.129030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.129059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.129244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.129450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.129478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.129665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.129828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.129857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.130049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.130195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.130222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.130433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.130605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.130642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.130827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.131013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.131041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [2024-07-23 01:09:49.131224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.131394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.131422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.131608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.131803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.131832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.132025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.132196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.132225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.132434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.132603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.132642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.132839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.133011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.133039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.133215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.133389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.133416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.133598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.133770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.133799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [2024-07-23 01:09:49.133964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.134163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.134191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.134405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.134604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.134643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.134828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.135005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.135034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.135222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.135403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.135431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.135594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.135814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.135843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-07-23 01:09:49.136036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.136208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-07-23 01:09:49.136237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.136395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.136570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.136605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 
00:30:05.184 [2024-07-23 01:09:49.136840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.137020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.137048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.137210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.137362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.137394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.137580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.137783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.137813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.137973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.138159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.138187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.138379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.138551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.138580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.138778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.138954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.138982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.139167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.139375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.139404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 
00:30:05.184 [2024-07-23 01:09:49.139587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.139763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.139791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.139979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.140152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.140182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.140362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.140508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.140540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.140733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.140907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.140936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.141118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.141292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.141319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.141511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.141708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.141738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.141912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.142088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.142117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 
00:30:05.184 [2024-07-23 01:09:49.142333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.142504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.142533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.142753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.142923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.142952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.143142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.143338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.143365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.143547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.143718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.143748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.143912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.144118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.144144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.144333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.144507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.144535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.144714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.144890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.144918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 
00:30:05.184 [2024-07-23 01:09:49.145079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.145222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.145251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.145460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.145601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.145649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.145869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.146046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.146075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.146267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.146440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.146468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.146668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.146840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.146869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.147032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.147225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.184 [2024-07-23 01:09:49.147252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.184 qpair failed and we were unable to recover it. 00:30:05.184 [2024-07-23 01:09:49.147435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.147608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.147647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 
00:30:05.185 [2024-07-23 01:09:49.147832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.148006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.148033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.148243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.148443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.148471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.148665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.148863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.148892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.149105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.149279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.149307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.149498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.149670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.149699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.149907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.150081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.150108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.150318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.150490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.150518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 
00:30:05.185 [2024-07-23 01:09:49.150715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.150905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.150934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.151118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.151314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.151341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.151556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.151737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.151767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.151957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.152131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.152158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.152371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.152548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.152577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.152768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.152921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.152948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.153140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.153318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.153347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 
00:30:05.185 [2024-07-23 01:09:49.153531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.153692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.153728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.153934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.154129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.154157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.154343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.154513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.154541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.154735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.154912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.154940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.155147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.155298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.155333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.155545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.155696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.155726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.155905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.156060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.156089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 
00:30:05.185 [2024-07-23 01:09:49.156275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.156457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.156486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.156680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.156827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.156856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.157064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.157213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.157242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.157449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.157603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.157644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.157840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.157988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.158016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.158203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.158384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.158413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.185 qpair failed and we were unable to recover it. 00:30:05.185 [2024-07-23 01:09:49.158598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.158781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.185 [2024-07-23 01:09:49.158810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 
00:30:05.186 [2024-07-23 01:09:49.158994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.159148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.159177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.159343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.159521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.159550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.159711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.159891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.159920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.160101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.160303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.160332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.160534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.160679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.160709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.160896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.161072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.161101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.161283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.161485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.161513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 
00:30:05.186 [2024-07-23 01:09:49.161727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.161935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.161963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.162127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.162300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.162329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.162489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.162663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.162693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.162890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.163065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.163093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.163317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.163490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.163518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.163675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.163824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.163853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.164012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.164183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.164212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 
00:30:05.186 [2024-07-23 01:09:49.164403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.164603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.164642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.164851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.165050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.165079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.165291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.165465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.165494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.165706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.165856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.165885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.166069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.166218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.166247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-07-23 01:09:49.166425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.166595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-07-23 01:09:49.166633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.166786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.166998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.167026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-07-23 01:09:49.167236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.167413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.167441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.167636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.167779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.167807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.167993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.168194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.168222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.168413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.168566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.168595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.168788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.168942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.168971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.169138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.169314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.169342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.169560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.169718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.169748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-07-23 01:09:49.169938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.170077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.170106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.170279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.170478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.170506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.170669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.170839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.170868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.171058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.171259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.171288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.171474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.171649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.171679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.171863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.172009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.172037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.172232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.172403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.172432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-07-23 01:09:49.172621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.172766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.172795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.172963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.173167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.173195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.173385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.173532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.173561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.173771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.173968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.173997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.174183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.174360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.174389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.174582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.174740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.174769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.174978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.175175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.175204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-07-23 01:09:49.175395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.175548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.175577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.175779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.175950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.175979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.176171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.176352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.176381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.176566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.176779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.176808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.176972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.177122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.177151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-07-23 01:09:49.177336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.177510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-07-23 01:09:49.177538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.177751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.177955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.177983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-07-23 01:09:49.178172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.178350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.178379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.178542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.178719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.178749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.178935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.179081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.179110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.179290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.179467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.179495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.179678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.179890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.179919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.180128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.180300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.180329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.180541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.180710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.180739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-07-23 01:09:49.180927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.181101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.181130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.181340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.181544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.181572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.181780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.181977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.182005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.182182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.182354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.182382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.182564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.182745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.182775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.182936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.183137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.183166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.183344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.183495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.183524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-07-23 01:09:49.183710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.183888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.183917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.184131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.184308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.184336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.184546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.184701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.184730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.184891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.185101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.185129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.185306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.185448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.185477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.185665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.185838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.185867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.186051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.186203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.186235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-07-23 01:09:49.186436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.186588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.186623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.186812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.187017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.187045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.187256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.187425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.187454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.187637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.187816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.187845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.188025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.188207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.188236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.188418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.188591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.188627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-07-23 01:09:49.188816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-07-23 01:09:49.188978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.189007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 
00:30:05.189 [2024-07-23 01:09:49.189194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.189366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.189395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.189578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.189797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.189827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.190038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.190189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.190218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.190371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.190548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.190577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.190776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.190943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.190971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.191132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.191309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.191338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.191522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.191693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.191722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 
00:30:05.189 [2024-07-23 01:09:49.191903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.192067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.192101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.192310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.192479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.192508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.192726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.192929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.192959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.193125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.193281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.193309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.193525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.193694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.193723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.193911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.194080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.194109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.194291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.194488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.194517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 
00:30:05.189 [2024-07-23 01:09:49.194677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.194846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.194875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.195059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.195235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.195264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.195455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.195625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.195655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.195835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.195989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.196025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.196206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.196407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.196435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.196631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.196811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.196840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.197027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.197201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.197230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 
00:30:05.189 [2024-07-23 01:09:49.197421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.197641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.197671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.197855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.198033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.198062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.198241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.198413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.198442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.198629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.198804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.198833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.199022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.199199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.199228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.199392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.199563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.199592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-07-23 01:09:49.199820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.200010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-07-23 01:09:49.200043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 
00:30:05.190 [2024-07-23 01:09:49.200226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.200432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.200461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.200627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.200820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.200849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.201060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.201204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.201232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.201402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.201604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.201642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.201832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.202015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.202044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.202233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.202413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.202442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.202610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.202828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.202857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 
00:30:05.190 [2024-07-23 01:09:49.203043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.203227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.203257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.203472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.203659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.203688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.203856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.204035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.204069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.204258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.204437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.204466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.204655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.204836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.204864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.205039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.205209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.205238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.205432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.205586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.205637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 
00:30:05.190 [2024-07-23 01:09:49.205829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.206012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.206041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.206227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.206401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.206431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.206590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.206757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.206787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.207011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.207176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.207204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.207415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.207599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.207638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.207825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.208026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.208054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.208221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.208426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.208454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 
00:30:05.190 [2024-07-23 01:09:49.208670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.208823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.208852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.209065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.209238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.209266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.209474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.209650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.209680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.209891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.210106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.210135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.210328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.210499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.210528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.190 qpair failed and we were unable to recover it. 00:30:05.190 [2024-07-23 01:09:49.210717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.190 [2024-07-23 01:09:49.210893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.210921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.211133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.211339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.211368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 
00:30:05.191 [2024-07-23 01:09:49.211532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.211723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.211753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.211971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.212142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.212171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.212339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.212520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.212548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.212743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.212886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.212915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.213081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.213279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.213308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.213505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.213681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.213711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.213922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.214091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.214119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 
00:30:05.191 [2024-07-23 01:09:49.214296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.214470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.214499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.214684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.214857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.214885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.215068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.215244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.215273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.215456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.215633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.215666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.215880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.216032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.216061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.216231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.216404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.216433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.216608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.216795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.216825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 
00:30:05.191 [2024-07-23 01:09:49.217039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.217207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.217236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.217418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.217640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.217671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.217840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.217990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.218019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.218205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.218403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.218432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.218628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.218832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.218861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.219057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.219236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.219265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.219456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.219612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.219649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 
00:30:05.191 [2024-07-23 01:09:49.219841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.220023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.220052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-07-23 01:09:49.220242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-07-23 01:09:49.220446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.220474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.220643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.220821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.220849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.221062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.221210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.221238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.221451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.221634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.221663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.221821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.221995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.222024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.222176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.222383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.222413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-07-23 01:09:49.222600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.222784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.222813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.222996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.223139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.223168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.223334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.223536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.223565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.223766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.223935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.223964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.224154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.224332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.224361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.224514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.224668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.224698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.224879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.225054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.225083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-07-23 01:09:49.225295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.225504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.225533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.225717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.225917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.225946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.226137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.226287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.226315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.226501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.226705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.226735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.226922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.227123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.227153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.227345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.227516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.227545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.227714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.227918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.227947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-07-23 01:09:49.228169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.228336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.228365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.228566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.228756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.228786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.228954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.229132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.229161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.229324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.229472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.229504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.229692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.229873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.229903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.230087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.230271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.230301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.230514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.230686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.230715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-07-23 01:09:49.230902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.231083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.231112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-07-23 01:09:49.231276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.231424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-07-23 01:09:49.231453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.231621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.231801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.231830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.232023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.232173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.232208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.232397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.232576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.232606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.232799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.232996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.233026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.233214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.233389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.233418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-07-23 01:09:49.233610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.233778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.233807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.233997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.234209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.234238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.234414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.234593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.234632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.234821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.234994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.235023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.235218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.235366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.235395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.235578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.235741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.235771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.235977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.236190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.236221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-07-23 01:09:49.236396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.236560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.236585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.236739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.236878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.236902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.237036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.237202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.237226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.237373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.237541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.237568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.237740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.237872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.237897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.238034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.238164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.238189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.238354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.238544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.238569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-07-23 01:09:49.238779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.238945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.238970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.239104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.239243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.239267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.239409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.239552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.239576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.239744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.239885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.239910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.240076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.240211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.240236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.240437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.240626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.240651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.240809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.240942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.240966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-07-23 01:09:49.241106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.241262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.241288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-07-23 01:09:49.241487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.241647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-07-23 01:09:49.241672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.241814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.241993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.242017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.242152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.242290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.242314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.242478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.242653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.242678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.242843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.243008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.243036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.243204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.243363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.243388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 
00:30:05.194 [2024-07-23 01:09:49.243552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.243689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.243713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.243852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.244011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.244036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.244226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.244387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.244412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.244583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.244750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.244775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.244914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.245056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.245081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.245249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.245414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.245440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.245600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.245798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.245823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 
00:30:05.194 [2024-07-23 01:09:49.245994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.246159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.246183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.246372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.246503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.246528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.246683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.246925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.246950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.247136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.247309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.247334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.247499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.247659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.247684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.247816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.247952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.247976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.248141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.248302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.248326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 
00:30:05.194 [2024-07-23 01:09:49.248518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.248688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.248712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.248904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.249059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.249083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.249222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.249356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.249381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.249545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.249684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.249709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.249846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.250009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.250034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.250209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.250372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.250396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.250559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.250719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.250744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 
00:30:05.194 [2024-07-23 01:09:49.250886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.251020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.251044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.251241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.251430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.251455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.194 qpair failed and we were unable to recover it. 00:30:05.194 [2024-07-23 01:09:49.251697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.194 [2024-07-23 01:09:49.251833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.251857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.252044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.252181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.252205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.252391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.252578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.252602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.252792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.252952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.252977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.253166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.253304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.253329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 
00:30:05.195 [2024-07-23 01:09:49.253496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.253628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.253653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.253814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.253975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.254000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.254161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.254349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.254374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.254538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.254724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.254749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.254891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.255076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.255100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.255340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.255502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.255526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.255691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.255829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.255853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 
00:30:05.195 [2024-07-23 01:09:49.255992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.256153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.256177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.256338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.256578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.256602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.256798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.256965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.256989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.257123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.257312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.257337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.257472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.257645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.257679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.257805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.257938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.257962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.258152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.258391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.258416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 
00:30:05.195 [2024-07-23 01:09:49.258594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.258764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.258788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.258980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.259137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.259162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.259324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.259484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.259508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.259696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.259860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.259884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.260019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.260185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.260210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.260350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.260486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.260510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.195 qpair failed and we were unable to recover it. 00:30:05.195 [2024-07-23 01:09:49.260753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.195 [2024-07-23 01:09:49.260948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.260972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 
00:30:05.196 [2024-07-23 01:09:49.261146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.261332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.261361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.261520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.261653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.261679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.261815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.261946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.261971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.262135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.262269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.262294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.262456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.262584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.262608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.262792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.263031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.263056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.263220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.263412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.263436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 
00:30:05.196 [2024-07-23 01:09:49.263573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.263737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.263762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.264002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.264241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.264265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.264430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.264597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.264639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.264783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.264938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.264967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.265131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.265370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.265396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.265589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.265755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.265780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.265943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.266136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.266160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 
00:30:05.196 [2024-07-23 01:09:49.266324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.266491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.266517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.266688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.266848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.266873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.267041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.267173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.267198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.267356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.267492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.267518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.267686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.267930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.267954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.268123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.268308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.268332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.268517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.268689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.268713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 
00:30:05.196 [2024-07-23 01:09:49.268889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.269052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.269078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.269268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.269444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.269468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.269634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.269794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.269818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.270005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.270163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.270189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.270325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.270489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.270513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.270648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.270786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.270810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 00:30:05.196 [2024-07-23 01:09:49.271000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.271158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.196 [2024-07-23 01:09:49.271182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.196 qpair failed and we were unable to recover it. 
00:30:05.197 [2024-07-23 01:09:49.271315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.271490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.271514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.271700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.271862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.271886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.272052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.272188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.272212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.272351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.272516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.272540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.272683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.272837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.272862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.273045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.273183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.273209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.273347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.273542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.273567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 
00:30:05.197 [2024-07-23 01:09:49.273733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.273899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.273925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.274103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.274280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.274305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.274468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.274635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.274660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.274802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.274967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.274991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.275155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.275291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.275316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.275457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.275589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.275618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.275762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.275931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.275955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 
00:30:05.197 [2024-07-23 01:09:49.276092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.276264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.276289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.276479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.276610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.276652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.276840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.277026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.277050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.277241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.277401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.277425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.277588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.277761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.277786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.277963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.278120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.278145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.278304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.278465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.278490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 
00:30:05.197 [2024-07-23 01:09:49.278682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.278822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.278848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.279019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.279171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.279196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.279380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.279543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.279572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.279736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.279925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.279949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.280114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.280353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.280378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.280546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.280711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.280736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 00:30:05.197 [2024-07-23 01:09:49.280924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.281163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.197 [2024-07-23 01:09:49.281188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.197 qpair failed and we were unable to recover it. 
00:30:05.197 [2024-07-23 01:09:49.281313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.281500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.281524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.281684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.281874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.281899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.282060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.282237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.282262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.282406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.282645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.282670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.282811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.282993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.283018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.283157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.283349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.283378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.283567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.283734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.283759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 
00:30:05.198 [2024-07-23 01:09:49.283920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.284082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.284107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.284271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.284411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.284435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.284583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.284794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.284820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.284984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.285146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.285171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.285336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.285577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.285602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.285773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.285911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.285935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.286120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.286307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.286331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 
00:30:05.198 [2024-07-23 01:09:49.287344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.287532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.287558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.287697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.287861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.287886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.288058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.288200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.288224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.288381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.288538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.288563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.288716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.288876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.288900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.289060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.289221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.289245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.289409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.289570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.289594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 
00:30:05.198 [2024-07-23 01:09:49.289762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.289895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.289920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.290084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.290245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.290269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.290513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.290661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.290685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.290853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.291010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.291034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.291196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.291357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.291381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.291546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.291687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.291712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.198 qpair failed and we were unable to recover it. 00:30:05.198 [2024-07-23 01:09:49.291873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.198 [2024-07-23 01:09:49.292034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.292059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 
00:30:05.199 [2024-07-23 01:09:49.292223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.292383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.292408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.292571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.292717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.292744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.292899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.293054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.293079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.293208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.293370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.293394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.293560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.293700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.293724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.293890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.294056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.294080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.294241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.294404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.294429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 
00:30:05.199 [2024-07-23 01:09:49.294568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.294736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.294761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.294896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.295060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.295085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.295325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.295514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.295539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.295679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.295822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.295846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.296013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.296177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.296200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.296358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.296520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.296544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.296708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.296865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.296891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 
00:30:05.199 [2024-07-23 01:09:49.297056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.297225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.297249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.297414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.297565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.297590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.297760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.297895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.297919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.298079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.298220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.298244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.298411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.298550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.298574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.298717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.298907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.298932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.299094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.299251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.299275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 
00:30:05.199 [2024-07-23 01:09:49.299431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.299589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.299620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.299796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.299962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.299987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.300228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.300386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.300410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.300571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.300751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.300776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.300936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.301075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.301099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.301245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.301378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.301402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 00:30:05.199 [2024-07-23 01:09:49.301565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.301705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.199 [2024-07-23 01:09:49.301734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.199 qpair failed and we were unable to recover it. 
00:30:05.199 [2024-07-23 01:09:49.301902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.302059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.302088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.302277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.302436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.302461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.302601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.302769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.302793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.302983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.303144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.303169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.303327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.303452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.303476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.303637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.303810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.303835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.304002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.304177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.304201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 
00:30:05.200 [2024-07-23 01:09:49.304395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.304552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.304576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.304720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.304910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.304934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.305097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.305237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.305261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.305409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.305544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.305567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.305738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.305915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.305940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.306100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.306262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.306286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.306445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.306584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.306608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 
00:30:05.200 [2024-07-23 01:09:49.306756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.306902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.306926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.307061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.307210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.307234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.307371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.307514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.307540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.307703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.307837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.307862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.307996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.308124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.308148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.308314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.308479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.308503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.308648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.308797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.308821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 
00:30:05.200 [2024-07-23 01:09:49.308987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.309141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.309165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.309322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.309485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.309509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.309670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.309840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.309864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.200 [2024-07-23 01:09:49.310005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.310163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.200 [2024-07-23 01:09:49.310188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.200 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.310377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.310538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.310563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.310745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.310907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.310931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.311100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.311250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.311275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 
00:30:05.201 [2024-07-23 01:09:49.311406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.311544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.311568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.311735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.311864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.311888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.312031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.312190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.312215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.312382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.312547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.312571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.312751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.312882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.312906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.313076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.313239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.313263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.314018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.314217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.314243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 
00:30:05.201 [2024-07-23 01:09:49.314383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.314571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.314596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.314741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.314986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.315011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.315204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.315340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.315364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.315528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.315697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.315723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.315880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.316017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.316042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.316181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.316344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.316368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.316537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.316679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.316705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 
00:30:05.201 [2024-07-23 01:09:49.316888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.317044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.317069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.317230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.317364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.317388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.317556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.317692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.317718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.317886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.318017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.318041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.318205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.318372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.318396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.318558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.318701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.318726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.318894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.319056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.319080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 
00:30:05.201 [2024-07-23 01:09:49.319242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.319402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.319426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.319564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.319701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.319726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.319890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.320075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.320103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.320246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.320413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.201 [2024-07-23 01:09:49.320439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.201 qpair failed and we were unable to recover it. 00:30:05.201 [2024-07-23 01:09:49.320627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.320785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.320809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.320948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.321093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.321118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.321282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.321422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.321446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 
00:30:05.202 [2024-07-23 01:09:49.321608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.321802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.321827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.321970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.322136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.322160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.322400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.322559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.322583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.322733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.322899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.322923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.323054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.323216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.323240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.323379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.323516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.323540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.323786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.323981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.324005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 
00:30:05.202 [2024-07-23 01:09:49.324170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.324327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.324351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.324514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.324676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.324700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.324842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.325016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.325040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.325173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.325359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.325383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.325540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.325706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.325731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.325869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.326111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.326136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.326329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.326456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.326480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 
00:30:05.202 [2024-07-23 01:09:49.326646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.326797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.326822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.326969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.327105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.327129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.327286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.327447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.327472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.327675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.327832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.327857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.327994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.328154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.328179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.328348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.328588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.328618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.328764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.328897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.328922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 
00:30:05.202 [2024-07-23 01:09:49.329057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.329218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.329243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.329432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.329568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.329593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.329784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.329984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.330017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.330213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.330418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.330446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.202 qpair failed and we were unable to recover it. 00:30:05.202 [2024-07-23 01:09:49.330642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.202 [2024-07-23 01:09:49.330818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.330848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.331040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.331221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.331250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.331414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.331588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.331625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 
00:30:05.203 [2024-07-23 01:09:49.331795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.331947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.331975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.332188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.332345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.332374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.332568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.332743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.332771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.332955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.333091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.333118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.333252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.333399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.333424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.333565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.333736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.333762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.333909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.334068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.334093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 
00:30:05.203 [2024-07-23 01:09:49.334255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.334431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.334456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.334593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.334782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.334809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.334955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.335198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.335223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.335388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.335561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.335585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.335725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.335866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.335890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.336024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.336193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.336218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.336381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.336521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.336546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 
00:30:05.203 [2024-07-23 01:09:49.336691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.336859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.336883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.337018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.337180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.337205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.337364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.337530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.337554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.337695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.337833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.337860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.338000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.338172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.338196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.338330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.338486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.338510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.338668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.338833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.338858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 
00:30:05.203 [2024-07-23 01:09:49.339022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.339155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.339180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.339339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.339506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.339530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.203 [2024-07-23 01:09:49.339696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.339825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.203 [2024-07-23 01:09:49.339851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.203 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.340019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.340150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.340174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.340315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.340477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.340502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.340645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.340810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.340835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.341039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.341169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.341194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 
00:30:05.204 [2024-07-23 01:09:49.341344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.341511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.341539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.341716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.341885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.341912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.342103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.342277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.342301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.342439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.342609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.342638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.342802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.342972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.342996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.343156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.343316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.343355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.343549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.343688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.343714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 
00:30:05.204 [2024-07-23 01:09:49.343856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.344020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.344045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.344233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.344369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.344393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.344556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.344733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.344759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.344924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.345058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.345093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.345260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.345426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.345450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.345610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.345810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.345836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.346038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.346181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.346206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 
00:30:05.204 [2024-07-23 01:09:49.346339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.346512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.346537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.346687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.346829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.346853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.347044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.347193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.347217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.347348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.347505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.347529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.347673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.347818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.347843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.348097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.348264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.348288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.348450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.348620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.348645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 
00:30:05.204 [2024-07-23 01:09:49.348826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.348954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.348978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.349125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.349306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.349331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.349515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.349703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.349729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.204 qpair failed and we were unable to recover it. 00:30:05.204 [2024-07-23 01:09:49.349925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.350109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.204 [2024-07-23 01:09:49.350133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.350326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.350465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.350490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.350622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.350786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.350811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.350948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.351108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.351132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 
00:30:05.205 [2024-07-23 01:09:49.351354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.351550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.351574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.351827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.351999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.352025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.352192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.352326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.352350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.352488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.352668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.352693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.352836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.352991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.353015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.353178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.353308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.353334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.353575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.353789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.353814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 
00:30:05.205 [2024-07-23 01:09:49.353959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.354115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.354139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.354275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.354412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.354436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.354627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.354867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.354892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.355055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.355218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.355242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.355401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.355587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.355612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.355805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.355941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.355965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.356106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.356270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.356294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 
00:30:05.205 [2024-07-23 01:09:49.356453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.356584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.356608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.356766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.356905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.356930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.357062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.357194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.357219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.357351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.357518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.357542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.357678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.357808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.357833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.357965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.358101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.358126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 00:30:05.205 [2024-07-23 01:09:49.358311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.358449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.205 [2024-07-23 01:09:49.358473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.205 qpair failed and we were unable to recover it. 
00:30:05.206 [2024-07-23 01:09:49.358632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.358767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.358791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.358970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.359103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.359128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.359312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.359472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.359501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.359691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.359866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.359890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.360025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.360187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.360210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.360400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.360554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.360579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.360729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.360893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.360917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 
00:30:05.206 [2024-07-23 01:09:49.361052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.361194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.361221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.361382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.361554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.361578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.361755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.361920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.361944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.362114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.362270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.362294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.362458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.362626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.362651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.362787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.362931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.362955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.363122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.363290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.363314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 
00:30:05.206 [2024-07-23 01:09:49.363481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.363669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.363694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.363850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.364020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.206 [2024-07-23 01:09:49.364045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.206 qpair failed and we were unable to recover it. 00:30:05.206 [2024-07-23 01:09:49.364176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.364312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.364335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.364524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.364673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.364697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.364845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.365000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.365024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.365178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.365332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.365355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.365521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.365683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.365707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 
00:30:05.479 [2024-07-23 01:09:49.365879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.366057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.366080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.366235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.366390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.366414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.366559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.366696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.366722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.366891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.367090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.367113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.367282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.367416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.367439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.479 [2024-07-23 01:09:49.367575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.367725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.479 [2024-07-23 01:09:49.367748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.479 qpair failed and we were unable to recover it. 00:30:05.480 [2024-07-23 01:09:49.367936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.480 [2024-07-23 01:09:49.368079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.480 [2024-07-23 01:09:49.368103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.480 qpair failed and we were unable to recover it. 
00:30:05.480 [2024-07-23 01:09:49.368281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.480 [2024-07-23 01:09:49.368443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.480 [2024-07-23 01:09:49.368467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420
00:30:05.480 qpair failed and we were unable to recover it.
[... the same three-line posix_sock_create / nvme_tcp_qpair_connect_sock error group for tqpair=0x1cf4350 (addr=10.0.0.2, port=4420) repeats with successive timestamps from 01:09:49.368611 through 01:09:49.421256, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:30:05.485 [2024-07-23 01:09:49.421390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.485 [2024-07-23 01:09:49.421553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.485 [2024-07-23 01:09:49.421578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420
00:30:05.485 qpair failed and we were unable to recover it.
00:30:05.485 [2024-07-23 01:09:49.421791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.421968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.421995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.422145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.422302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.422326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.422559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.422706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.422730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.422895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.423047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.423074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.423250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.423436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.423462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.423602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.423799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.423823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.424002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.424206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.424233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 
00:30:05.485 [2024-07-23 01:09:49.424386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.424594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.424624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.485 [2024-07-23 01:09:49.424793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.425000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.485 [2024-07-23 01:09:49.425027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.485 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.425232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.425377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.425404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.425557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.425703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.425731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.425941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.426104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.426128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.426307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.426482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.426508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.426696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.426863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.426888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 
00:30:05.486 [2024-07-23 01:09:49.427074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.427287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.427312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.427446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.427631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.427659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.427840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.428031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.428055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.428196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.428382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.428423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.428634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.428800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.428824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.429002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.429197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.429238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.429393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.429599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.429647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 
00:30:05.486 [2024-07-23 01:09:49.429811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.430001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.430025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.430260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.430421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.430446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.430582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.430791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.430816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.430945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.431140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.431165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.431337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.431477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.431520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.431678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.431880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.431908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.432069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.432204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.432228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 
00:30:05.486 [2024-07-23 01:09:49.432392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.432589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.432623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.432804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.432973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.432998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.433157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.433315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.433339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.433509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.433670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.433695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.433849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.434002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.434029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.434203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.434405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.434428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.434593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.434753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.434798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 
00:30:05.486 [2024-07-23 01:09:49.435019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.435184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.435208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.486 [2024-07-23 01:09:49.435373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.435505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.486 [2024-07-23 01:09:49.435529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.486 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.435720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.435911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.435935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.436097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.436233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.436257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.436430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.436655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.436680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.436820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.436960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.436984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.437149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.437324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.437351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 
00:30:05.487 [2024-07-23 01:09:49.437514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.437655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.437682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.437908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.438073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.438097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.438261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.438486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.438516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.438663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.438890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.438918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.439111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.439244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.439269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.439468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.439600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.439632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.439797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.440004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.440029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 
00:30:05.487 [2024-07-23 01:09:49.440168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.440381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.440409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.440622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.440812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.440836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.440990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.441142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.441166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.441301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.441526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.441551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.441708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.441842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.441883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.442064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.442247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.442273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.442453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.442661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.442685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 
00:30:05.487 [2024-07-23 01:09:49.442851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.442990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.443015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.443176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.443339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.443380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.443548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.443743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.443767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.443927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.444117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.444141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.444306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.444470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.444494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.444654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.444820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.444845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.445010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.445141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.445165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 
00:30:05.487 [2024-07-23 01:09:49.445351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.445519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.445543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.487 qpair failed and we were unable to recover it. 00:30:05.487 [2024-07-23 01:09:49.445702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.487 [2024-07-23 01:09:49.445869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.445909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.446097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.446232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.446256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.446400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.446569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.446593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.446844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.447007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.447031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.447249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.447448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.447476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.447670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.447834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.447858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 
00:30:05.488 [2024-07-23 01:09:49.448043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.448204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.448245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.448453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.448642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.448666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.448841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.449025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.449048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.449210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.449396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.449421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.449578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.449720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.449745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.449891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.450071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.450095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.450249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.450405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.450429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 
00:30:05.488 [2024-07-23 01:09:49.450583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.450732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.450757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.450911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.451073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.451098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.451263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.451452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.451479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.451674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.451838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.451878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.452091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.452250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.452276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.452468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.452683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.452708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.452908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.453093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.453120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 
00:30:05.488 [2024-07-23 01:09:49.453333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.453472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.453496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.453730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.453879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.453903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.454043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.454167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.454191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.454326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.454513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.454555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.454706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.454878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.454905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.455086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.455219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.455259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 00:30:05.488 [2024-07-23 01:09:49.455435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.455648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-07-23 01:09:49.455674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.488 qpair failed and we were unable to recover it. 
00:30:05.488 [2024-07-23 01:09:49.455821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.456010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.456052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 00:30:05.489 [2024-07-23 01:09:49.456239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.456423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.456447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 00:30:05.489 [2024-07-23 01:09:49.456611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.456809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.456836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 00:30:05.489 [2024-07-23 01:09:49.457006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.457197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.457238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 00:30:05.489 [2024-07-23 01:09:49.457384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.457572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.457600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 00:30:05.489 [2024-07-23 01:09:49.457770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.457951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.457977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 00:30:05.489 [2024-07-23 01:09:49.458185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.458390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.489 [2024-07-23 01:09:49.458417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.489 qpair failed and we were unable to recover it. 
00:30:05.489 [2024-07-23 01:09:49.458578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:05.489 [2024-07-23 01:09:49.458772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:05.489 [2024-07-23 01:09:49.458815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 
00:30:05.489 qpair failed and we were unable to recover it. 
[... the same error group (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1cf4350 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats verbatim, with only the microsecond timestamps advancing, from 01:09:49.458578 through 01:09:49.517517 ...]
00:30:05.494 [2024-07-23 01:09:49.517696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.494 [2024-07-23 01:09:49.517868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.494 [2024-07-23 01:09:49.517896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.494 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.518061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.518229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.518253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.518421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.518590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.518624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.518833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.518989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.519016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.519186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.519366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.519393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.519577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.519726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.519751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.519897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.520082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.520113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 
00:30:05.495 [2024-07-23 01:09:49.520296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.520441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.520467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.520637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.520813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.520840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.521022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.521210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.521237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.521389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.521545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.521572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.521771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.521931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.521971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.522127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.522315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.522342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 00:30:05.495 [2024-07-23 01:09:49.522520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.522678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.495 [2024-07-23 01:09:49.522703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.495 qpair failed and we were unable to recover it. 
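Every attempt above fails with errno = 111, which on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 is actively refused because nothing is listening on that port, consistent with the nvmf target having been taken down by the disconnect test (see the Killed line below). A minimal sketch, not SPDK code, that reproduces and decodes the same errno; the address and port simply mirror the log:

/* Minimal sketch (not SPDK code): attempt a TCP connect and decode errno 111. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints:
         * "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}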
00:30:05.495 [... the failure sequence continues from 01:09:49.522881 through 01:09:49.524979, still against tqpair=0x1cf4350 ...]
00:30:05.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3529663 Killed "${NVMF_APP[@]}" "$@"
00:30:05.495 [2024-07-23 01:09:49.525161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.495 [2024-07-23 01:09:49.525346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.495 [2024-07-23 01:09:49.525370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420
00:30:05.495 qpair failed and we were unable to recover it.
00:30:05.495 01:09:49 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:30:05.495 01:09:49 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:05.495 01:09:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:05.495 01:09:49 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:05.495 01:09:49 -- common/autotest_common.sh@10 -- # set +x
00:30:05.495 [... interleaved with the shell trace above, the connect() failed, errno = 111 / tqpair=0x1cf4350 failure sequence keeps repeating from 01:09:49.525535 through 01:09:49.527621 ...]
00:30:05.495-00:30:05.496 [... the failure sequence continues from 01:09:49.527828 through 01:09:49.530361, still against tqpair=0x1cf4350, addr=10.0.0.2, port=4420 ...]
00:30:05.496 01:09:49 -- nvmf/common.sh@469 -- # nvmfpid=3530360
00:30:05.496 01:09:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:05.496 01:09:49 -- nvmf/common.sh@470 -- # waitforlisten 3530360
00:30:05.496 01:09:49 -- common/autotest_common.sh@819 -- # '[' -z 3530360 ']'
00:30:05.496 01:09:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:05.496 01:09:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:05.496 01:09:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:05.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:05.496 01:09:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:05.496 01:09:49 -- common/autotest_common.sh@10 -- # set +x
00:30:05.496 [... interleaved with the shell trace above, the connect() failed, errno = 111 / tqpair=0x1cf4350 failure sequence keeps repeating from 01:09:49.530563 through 01:09:49.532247 ...]
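The trace above restarts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100, i.e. it waits until the new process answers on its UNIX-domain RPC socket. A hedged sketch of that polling idea, not the actual autotest_common.sh implementation; the path and retry count are taken from the trace:

/* Hedged sketch: poll a UNIX-domain socket path until connect() succeeds
 * or the retry budget runs out. Illustration only. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);      /* wait 100 ms before the next probe */
    }
    return -1;                   /* gave up after max_retries attempts */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock: %s\n",
                strerror(errno));
        return 1;
    }
    puts("RPC socket is accepting connections");
    return 0;
}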
00:30:05.496 [... the failure sequence continues from 01:09:49.532457 through 01:09:49.538907, still against tqpair=0x1cf4350, addr=10.0.0.2, port=4420 ...]
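The restarted target is launched with -m 0xF0, SPDK's hexadecimal CPU core mask; bit i selects core i, so 0xF0 pins the app to cores 4-7. A small illustrative sketch (not SPDK code) that decodes such a mask:

/* Decode a hex core mask like the 0xF0 passed to nvmf_tgt -m.
 * Bit i set means core i is used, so 0xF0 -> cores 4 5 6 7. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Default to the mask seen in the log; any hex mask can be passed in. */
    unsigned long mask = strtoul(argc > 1 ? argv[1] : "0xF0", NULL, 16);

    printf("mask 0x%lX selects cores:", mask);
    for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core))
            printf(" %u", core);
    }
    putchar('\n');
    return 0;
}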
00:30:05.496-00:30:05.498 [... the failure sequence continues from 01:09:49.539078 through 01:09:49.557956, still against tqpair=0x1cf4350, addr=10.0.0.2, port=4420 ...]
00:30:05.498 [2024-07-23 01:09:49.558148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.498 [2024-07-23 01:09:49.558377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.498 [2024-07-23 01:09:49.558414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:05.498 qpair failed and we were unable to recover it.
00:30:05.498-00:30:05.499 [... the failure sequence repeats against tqpair=0x7fb178000b90 from 01:09:49.558622 through 01:09:49.562494, then against tqpair=0x7fb168000b90 from 01:09:49.562687 through 01:09:49.564126, addr=10.0.0.2, port=4420 throughout ...]
00:30:05.499 [2024-07-23 01:09:49.564326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.564509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.564538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.564797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.564967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.564997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.565211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.565392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.565422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.565587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.565773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.565803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.566017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.566216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.566245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.566405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.566563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.566591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.566793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.566952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.566983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 
00:30:05.499 [2024-07-23 01:09:49.567171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.567361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.567390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.567606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.567764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.567793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.567987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.568188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.568217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.568408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.568590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.568629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.568843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.569020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.569049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.569266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.569420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.569449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.569623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.569826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.569855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 
00:30:05.499 [2024-07-23 01:09:49.570024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.570227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.570256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.570411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.570628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.570658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.570851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.571064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.571093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.571252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.571452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.571482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.571638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.571795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.571826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.571995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.572201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.572231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.572453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.572640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.572670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 
00:30:05.499 [2024-07-23 01:09:49.572867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.573077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.573106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.573289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.573458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.573487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.573673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.573851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.499 [2024-07-23 01:09:49.573880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.499 qpair failed and we were unable to recover it. 00:30:05.499 [2024-07-23 01:09:49.574043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.574224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.574253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.500 qpair failed and we were unable to recover it. 00:30:05.500 [2024-07-23 01:09:49.574417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.574622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.574652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.500 qpair failed and we were unable to recover it. 00:30:05.500 [2024-07-23 01:09:49.574840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.575020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.575049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.500 qpair failed and we were unable to recover it. 00:30:05.500 [2024-07-23 01:09:49.575240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.575398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.500 [2024-07-23 01:09:49.575428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.500 qpair failed and we were unable to recover it. 
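On Linux, errno = 111 is ECONNREFUSED: the TCP target at 10.0.0.2:4420 is actively refusing the connection, which usually means no NVMe-oF/TCP listener is (yet) bound to that address and port while the initiator keeps retrying. The standalone sketch below is not SPDK code; it only illustrates, under that assumption, how a plain connect() to an address/port with no listener produces this errno. The address and port are taken from the log; everything else is illustrative.

/* Minimal sketch (not SPDK code): connect() to an address/port with no
 * listener fails with errno 111 (ECONNREFUSED), which is what
 * posix_sock_create keeps reporting for 10.0.0.2:4420 in this log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}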
00:30:05.500 [2024-07-23 01:09:49.575641 to 01:09:49.577015] (further connect() failed, errno = 111 and nvme_tcp_qpair_connect_sock sock connection error entries for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.")
00:30:05.500 [2024-07-23 01:09:49.577105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization...
00:30:05.500 [2024-07-23 01:09:49.577184] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:05.500 [2024-07-23 01:09:49.577228 to 01:09:49.577848] (the connect() failure pattern continues for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420)
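The "[ DPDK EAL parameters: ... ]" line records the argument vector the nvmf application hands to DPDK's Environment Abstraction Layer during initialization. As a rough, assumed illustration only (this is not how the test launches the target), a standalone DPDK program would pass a similar vector to rte_eal_init(); the option values below are copied from the log line, while the surrounding program is hypothetical.

/* Rough sketch, not the SPDK nvmf target: feeds an EAL argument vector like
 * the one logged above to DPDK's rte_eal_init(). Option values come from the
 * log; the program around them is assumed. */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                            /* program name, as in the log */
        "-c", "0xF0",                      /* core mask */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    /* rte_eal_init() parses the EAL options and reserves hugepage-backed
     * memory; a negative return value corresponds to the kind of EAL
     * initialization problems reported later in this log. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    rte_eal_cleanup();
    return 0;
}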
00:30:05.500-00:30:05.503 [2024-07-23 01:09:49.578062 to 01:09:49.614865] (the same three-part failure pattern repeats for every reconnect attempt in this interval: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.")
00:30:05.503 [2024-07-23 01:09:49.615103 to 01:09:49.617464] (repeated connect() failed, errno = 111 and sock connection error entries for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.")
00:30:05.503 EAL: No free 2048 kB hugepages reported on node 1
00:30:05.503 [2024-07-23 01:09:49.617720 to 01:09:49.618192] (the connect() failure pattern continues for tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420)
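The EAL message above means DPDK found no free 2048 kB (2 MiB) hugepages on NUMA node 1 while reserving memory for the target. One simple way to see what the kernel currently exposes is to read the global per-size hugepage counters in sysfs; the sketch below does that. It is an assumed inspection helper, not part of the test, and it reads only the global counters (per-node counters live under /sys/devices/system/node/ and are not read here).

/* Minimal sketch (assumed, not part of the test): prints the kernel's global
 * 2048 kB hugepage counters, the resource the EAL message above says has no
 * free pages on node 1. */
#include <stdio.h>

static long read_counter(const char *path)
{
    long value = -1;
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &value) != 1) {
            value = -1;
        }
        fclose(f);
    }
    return value;
}

int main(void)
{
    const char *base = "/sys/kernel/mm/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    long total = read_counter(path);

    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    long free_pages = read_counter(path);

    printf("2048 kB hugepages: total=%ld free=%ld\n", total, free_pages);
    return 0;
}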
00:30:05.503 [2024-07-23 01:09:49.618428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.618629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.618658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.503 qpair failed and we were unable to recover it. 00:30:05.503 [2024-07-23 01:09:49.618936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.619120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.619165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.503 qpair failed and we were unable to recover it. 00:30:05.503 [2024-07-23 01:09:49.619363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.619601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.619642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.503 qpair failed and we were unable to recover it. 00:30:05.503 [2024-07-23 01:09:49.619817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.620041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.620069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.503 qpair failed and we were unable to recover it. 00:30:05.503 [2024-07-23 01:09:49.620305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.503 [2024-07-23 01:09:49.620485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.620515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.620688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.620868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.620897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.621062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.621240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.621269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 
00:30:05.504 [2024-07-23 01:09:49.621435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.621645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.621675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.621864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.622049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.622093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.622294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.622493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.622523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.622692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.622862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.622891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.623123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.623306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.623336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.623558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.623742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.623772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.623954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.624182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.624211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 
00:30:05.504 [2024-07-23 01:09:49.624439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.624617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.624647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.624812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.624988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.625017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.625177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.625352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.625381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.625537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.625699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.625729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.625975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.626124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.626168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.626380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.626586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.626623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.626816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.626993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.627022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 
00:30:05.504 [2024-07-23 01:09:49.627209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.627389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.627418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.627583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.627763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.627792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.627999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.628223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.628251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.628457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.628678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.628709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.628898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.629242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.629271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.629436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.629619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.629648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.629883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.630082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.630111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 
00:30:05.504 [2024-07-23 01:09:49.630325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.630510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.630538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.630766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.630966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.630996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.631230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.631382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.631411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.631648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.631825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.631854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.504 qpair failed and we were unable to recover it. 00:30:05.504 [2024-07-23 01:09:49.632044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.632222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.504 [2024-07-23 01:09:49.632251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.632456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.632593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.632646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.632815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.633030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.633059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 
00:30:05.505 [2024-07-23 01:09:49.633250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.633426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.633469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.633687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.633861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.633907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.634104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.634302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.634331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.634542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.634730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.634760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.634978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.635145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.635174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.635382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.635536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.635580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.635813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.635994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.636023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 
00:30:05.505 [2024-07-23 01:09:49.636224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.636426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.636455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.636690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.636860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.636903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.637108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.637257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.637286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.637503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.637688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.637718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.637926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.638072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.638116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.638300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.638508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.638537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.638709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.638889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.638919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 
00:30:05.505 [2024-07-23 01:09:49.639084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.639290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.639320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.639576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.639766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.639795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.639989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.640209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.640239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.640457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.640628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.640658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.640851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.641028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.641058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.641221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.641408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.641437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.641648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.641826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.641856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 
00:30:05.505 [2024-07-23 01:09:49.642035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.642216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.642245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.642452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.642664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.642693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.642892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.643125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.643153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.643362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.643548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.643592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.643778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.643984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.505 [2024-07-23 01:09:49.644014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.505 qpair failed and we were unable to recover it. 00:30:05.505 [2024-07-23 01:09:49.644227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.644407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.644451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.644653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.644831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.644859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 
00:30:05.506 [2024-07-23 01:09:49.645109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.645314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.645343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.645556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.645742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.645771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.645990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.646167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.646196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.646365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.646543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.646572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.646744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.646933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.646962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.647122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.647293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.647337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.647541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.647716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.647746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 
00:30:05.506 [2024-07-23 01:09:49.647929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.648230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.648259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.648462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.648610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.648678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.648888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.649071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.649115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.649314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.649513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.649541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.649755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.649929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.649972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.650192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.650333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.650362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 00:30:05.506 [2024-07-23 01:09:49.650549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.650767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.506 [2024-07-23 01:09:49.650797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.506 qpair failed and we were unable to recover it. 
[... two more attempts against tqpair=0x7fb168000b90 fail the same way between 01:09:49.650990 and 01:09:49.651580 ...]
00:30:05.506 [2024-07-23 01:09:49.651804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.506 [2024-07-23 01:09:49.652012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.506 [2024-07-23 01:09:49.652053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:05.506 qpair failed and we were unable to recover it.
00:30:05.506 [2024-07-23 01:09:49.653820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same failure sequence, now against tqpair=0x7fb178000b90 (addr=10.0.0.2, port=4420), continues through 01:09:49.653922 ...]
[... the connect() failed, errno = 111 / sock connection error of tqpair=0x7fb178000b90 (addr=10.0.0.2, port=4420) / qpair failed and we were unable to recover it sequence repeats for every connection attempt from 01:09:49.654122 through 01:09:49.669579 ...]
00:30:05.783 [2024-07-23 01:09:49.669808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.783 [2024-07-23 01:09:49.670017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.783 [2024-07-23 01:09:49.670051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420
00:30:05.783 qpair failed and we were unable to recover it.
[... the same sequence against tqpair=0x7fb168000b90 (addr=10.0.0.2, port=4420) repeats for every connection attempt through 01:09:49.678476 ...]
00:30:05.784 [2024-07-23 01:09:49.678677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.678856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.678887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.679128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.679322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.679356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.679550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.679721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.679751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.679927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.680148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.680176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.680397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.680586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.680641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.680974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.681139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.681168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.681366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.681543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.681572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 
00:30:05.784 [2024-07-23 01:09:49.681776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.681928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.681957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.682165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.682360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.682389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.784 qpair failed and we were unable to recover it. 00:30:05.784 [2024-07-23 01:09:49.682634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.682790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.784 [2024-07-23 01:09:49.682820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.682980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.683156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.683200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.683442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.683627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.683657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.683855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.684081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.684110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.684350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.684548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.684578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 
00:30:05.785 [2024-07-23 01:09:49.684803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.685011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.685040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.685308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.685504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.685547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.685743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.685898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.685928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.686168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.686350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.686380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.686569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.686751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.686781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.687004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.687178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.687207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.687397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.687593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.687645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 
00:30:05.785 [2024-07-23 01:09:49.687868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.688046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.688076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.688276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.688426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.688456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.688640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.688884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.688914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.689112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.689280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.689309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.689579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.689788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.689817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.690076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.690302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.690331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.690544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.690724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.690754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 
00:30:05.785 [2024-07-23 01:09:49.690937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.691180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.691222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.691464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.691617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.691648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.691837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.692043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.692070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.692263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.692446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.692475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.692721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.692873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.692902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.693135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.693431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.693460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.693673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.693870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.693899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 
00:30:05.785 [2024-07-23 01:09:49.694142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.694366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.694395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.694607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.694831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.694861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.695101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.695293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.695321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.785 qpair failed and we were unable to recover it. 00:30:05.785 [2024-07-23 01:09:49.695559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.785 [2024-07-23 01:09:49.695723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.695753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.695973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.696177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.696205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.696417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.696626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.696657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.696844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.696994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.697023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 
00:30:05.786 [2024-07-23 01:09:49.697214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.697461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.697491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.697684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.697860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.697890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.698110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.698259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.698308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.698529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.698682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.698713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.698905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.699081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.699126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.699333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.699529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.699558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.699731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.699882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.699925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 
00:30:05.786 [2024-07-23 01:09:49.700116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.700292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.700335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.700529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.700759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.700789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.701007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.701184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.701213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.701439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.701626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.701655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.701849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.702078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.702109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.702298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.702475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.702504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.702692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.702903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.702932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 
00:30:05.786 [2024-07-23 01:09:49.703142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.703297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.703341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.703501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.703699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.703729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.703922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.704097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.704127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.704402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.704581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.704610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.704839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.705076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.705119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.705314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.705497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.705541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.705739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.705877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.705907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 
00:30:05.786 [2024-07-23 01:09:49.706077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.706286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.706315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.706516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.706663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.706693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.706977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.707173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.707201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.786 qpair failed and we were unable to recover it. 00:30:05.786 [2024-07-23 01:09:49.707396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.707544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.786 [2024-07-23 01:09:49.707574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.707793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.708040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.708068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.708262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.708600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.708637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.708826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.709009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.709038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 
00:30:05.787 [2024-07-23 01:09:49.709282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.709430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.709459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.709673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.709855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.709884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.710103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.710273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.710317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.710497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.710670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.710700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.710913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.711065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.711093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.711286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.711468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.711501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.711715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.711869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.711898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 
00:30:05.787 [2024-07-23 01:09:49.712118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.712385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.712420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.712596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.712792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.712823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.713014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.713170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.713199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.713373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.713558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.713587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.713812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.713994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.714038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.714236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.714391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.714421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.714674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.714848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.714877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 
00:30:05.787 [2024-07-23 01:09:49.715095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.715303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.715332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.715496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.715699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.715728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.715906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.716192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.716220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.716419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.716627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.716656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.716844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.717005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.717036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.717368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.717536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.717564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.717784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.717934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.717964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 
00:30:05.787 [2024-07-23 01:09:49.718125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.718310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.718339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.718504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.718736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.718766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.718957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.719133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.719162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.719330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.719514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.719558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.787 qpair failed and we were unable to recover it. 00:30:05.787 [2024-07-23 01:09:49.719816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.787 [2024-07-23 01:09:49.720043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.720070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.720286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.720439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.720465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.720717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.720893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.720922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 
00:30:05.788 [2024-07-23 01:09:49.721110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.721327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.721355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.721574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.721786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.721815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.722012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.722218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.722247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.722442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.722637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.722670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.722866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.723045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.723074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.723357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.723533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.723562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.723784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.723982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.724011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 
00:30:05.788 [2024-07-23 01:09:49.724200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.724378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.724407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.724599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.724784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.724813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.724979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.725177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.725207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.725364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.725563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.725592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.725789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.725962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.725991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.726179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.726336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.726365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.726577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.726770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.726799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 
00:30:05.788 [2024-07-23 01:09:49.727014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.727220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.727249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.727467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.727670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.727700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.727880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.728047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.728091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.728334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.728544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.728573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.728789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.728965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.728994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.729181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.729357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.729385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 00:30:05.788 [2024-07-23 01:09:49.729575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.729734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.788 [2024-07-23 01:09:49.729778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.788 qpair failed and we were unable to recover it. 
00:30:05.789 [2024-07-23 01:09:49.729993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.730207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.730234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.730477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.730645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.730675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.730878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.731119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.731148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.731334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.731510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.731539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.731728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.731935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.731965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.732180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.732350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.732379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.732588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.732788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.732818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 
00:30:05.789 [2024-07-23 01:09:49.733032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.733223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.733252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.733434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.733603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.733655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.733844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.733999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.734028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.734226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.734427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.734455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.734683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.734861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.734905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.735101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.735277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.735306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.735490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.735674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.735709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 
00:30:05.789 [2024-07-23 01:09:49.735889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.736076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.736104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.736296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.736504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.736533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.736714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.736891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.736936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.737124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.737304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.737334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.737571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.737747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.737777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.737962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.738170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.738199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.738369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.738508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.738535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 
00:30:05.789 [2024-07-23 01:09:49.738776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.738975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.739004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.739162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.739340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.739386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.739581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.739748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.739781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.739975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.740192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.740236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.740457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.740658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.740688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.740902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.741075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.741104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 00:30:05.789 [2024-07-23 01:09:49.741264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.741451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.741479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.789 qpair failed and we were unable to recover it. 
00:30:05.789 [2024-07-23 01:09:49.741680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.789 [2024-07-23 01:09:49.741884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.741913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.742130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.742306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.742335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.742519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.742698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.742728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.742912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.743124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.743153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.743315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.743489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.743534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.743755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.743942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.743975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.744165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.744343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.744387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 
00:30:05.790 [2024-07-23 01:09:49.744567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.744752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.744782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.744990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.745239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.745267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.745466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.745626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.745656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.745864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.746063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.746092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.746361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.746552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.746596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.746898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.747144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.747173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.747385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.747536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.747566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 
00:30:05.790 [2024-07-23 01:09:49.747785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.747932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.747961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.748135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.748315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.748359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.748583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.748739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.748769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.748948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.749206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.749234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.749452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.749642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.749678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.749890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.750036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.750065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.750252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.750518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.750548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 
00:30:05.790 [2024-07-23 01:09:49.750717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.750869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.750898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.751064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.751243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.751272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.751437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.751518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.790 [2024-07-23 01:09:49.751585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.751623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.751656] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.790 [2024-07-23 01:09:49.751686] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.790 [2024-07-23 01:09:49.751699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.790 [2024-07-23 01:09:49.751795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.751754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.790 [2024-07-23 01:09:49.751784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.790 [2024-07-23 01:09:49.751833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.790 [2024-07-23 01:09:49.751836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.790 [2024-07-23 01:09:49.751959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.751988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 00:30:05.790 [2024-07-23 01:09:49.752177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.752349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.752379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.790 qpair failed and we were unable to recover it. 
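The app_setup_trace notices above point at the built-in tracing workflow for digging into these events; a minimal sketch of capturing the suggested snapshot on the build node (assuming the spdk_trace tool from this SPDK build is on PATH, and /tmp is just an arbitrary destination for the copy) would be:
+ spdk_trace -s nvmf -i 0
+ cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # keep the shared-memory trace file for offline analysis/debug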
00:30:05.790 [2024-07-23 01:09:49.752539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.752694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.790 [2024-07-23 01:09:49.752731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.752920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.753065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.753094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.753275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.753474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.753503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.753682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.753832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.753861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.754060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.754244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.754273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.754459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.754659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.754689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.754892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.755097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.755126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 
00:30:05.791 [2024-07-23 01:09:49.755314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.755514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.755548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.755741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.755917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.755947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.756157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.756333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.756362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.756532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.756709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.756739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.756925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.757100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.757129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.757317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.757518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.757548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.757711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.757863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.757893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 
00:30:05.791 [2024-07-23 01:09:49.758180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.758358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.758387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.758573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.758783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.758813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.758972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.759142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.759170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.759361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.759537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.759567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.759836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.760039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.760068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.760235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.760406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.760435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.760628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.760842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.760871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 
00:30:05.791 [2024-07-23 01:09:49.761038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.761213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.761242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.761419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.761562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.761591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.761776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.761937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.761966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.762129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.762305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.762335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.762497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.762663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.762693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.762858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.763038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.763067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.763278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.763525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.763554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 
00:30:05.791 [2024-07-23 01:09:49.763749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.763952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.763981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.791 qpair failed and we were unable to recover it. 00:30:05.791 [2024-07-23 01:09:49.764138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.791 [2024-07-23 01:09:49.764289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.764318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.764494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.764657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.764687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.764906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.765084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.765112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.765295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.765467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.765495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.765657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.765834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.765862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.766040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.766243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.766272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 
00:30:05.792 [2024-07-23 01:09:49.766459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.766610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.766643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.766816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.767031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.767059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.767222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.767365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.767393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.767576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.767754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.767783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.767967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.768202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.768230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.768441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.768584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.768620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.768779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.768952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.768981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 
00:30:05.792 [2024-07-23 01:09:49.769153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.769294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.769322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.769531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.769702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.769731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.769927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.770094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.770122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.770302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.770476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.770504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.770656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.770826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.770854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.771041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.771191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.771219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.771412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.771561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.771590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 
00:30:05.792 [2024-07-23 01:09:49.771775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.771944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.771972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.772138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.772281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.772310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.772468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.772631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.772660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.772845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.773024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.773053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.792 [2024-07-23 01:09:49.773241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.773391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.792 [2024-07-23 01:09:49.773419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.792 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.773603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.773759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.773788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.773946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.774147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.774175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 
00:30:05.793 [2024-07-23 01:09:49.774361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.774529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.774558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.774721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.774889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.774918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.775108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.775275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.775303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.775511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.775681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.775711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.775963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.776210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.776239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.776428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.776600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.776634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.776782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.776922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.776950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 
00:30:05.793 [2024-07-23 01:09:49.777138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.777284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.777312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.777470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.777641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.777669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.777833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.777985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.778013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.778226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.778397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.778425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.778588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.778784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.778812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.779010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.779173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.779202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.779362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.779534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.779563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 
00:30:05.793 [2024-07-23 01:09:49.779751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.779917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.779946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.780098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.780267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.780296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.780456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.780633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.780662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.780828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.780982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.781011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.781191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.781392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.781421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.781626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.781802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.781830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.781981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.782130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.782159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 
00:30:05.793 [2024-07-23 01:09:49.782346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.782489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.782517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb168000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.782754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.782934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.782977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.783183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.783376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.783418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.783637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.783807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.783846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.784050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.784218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.784258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.793 qpair failed and we were unable to recover it. 00:30:05.793 [2024-07-23 01:09:49.784463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.793 [2024-07-23 01:09:49.784638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.784677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.784874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.785066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.785107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 
00:30:05.794 [2024-07-23 01:09:49.785289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.785496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.785536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.785738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.785897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.785938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.786133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.786297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.786336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.786506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.786666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.786705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.786895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.787073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.787101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.787250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.787388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.787413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.787570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.787739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.787765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 
00:30:05.794 [2024-07-23 01:09:49.787934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.788093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.788117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.788311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.788469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.788494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.788643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.788788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.788815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.788980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.789111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.789135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.789377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.789507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.789531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.789697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.789834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.789859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.789994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.790134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.790159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 
00:30:05.794 [2024-07-23 01:09:49.790321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.790516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.790541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.790697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.790838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.790862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.791028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.791173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.791198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.791330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.791470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.791495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.791631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.791792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.791816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.791977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.792147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.792173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.792310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.792470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.792495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 
00:30:05.794 [2024-07-23 01:09:49.792631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.792779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.792804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.792929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.793059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.793084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.793241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.793387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.793411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.793535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.793710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.793739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.793912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.794050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.794074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.794227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.794359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.794 [2024-07-23 01:09:49.794383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.794 qpair failed and we were unable to recover it. 00:30:05.794 [2024-07-23 01:09:49.794548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.794729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.794754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 
00:30:05.795 [2024-07-23 01:09:49.794891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.795178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.795527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.795835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.795989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.796177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.796311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.796335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.796479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.796611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.796643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.796807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.796943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.796967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 
00:30:05.795 [2024-07-23 01:09:49.797100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.797226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.797250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.797410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.797643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.797668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.797815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.797947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.797974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.798117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.798281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.798306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.798434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.798570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.798594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.798830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.799184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 
00:30:05.795 [2024-07-23 01:09:49.799474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.799770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.799952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.800104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.800237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.800262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.800406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.800599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.800628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.800768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.800937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.800961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.801095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.801237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.801261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.801408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.801540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.801564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 
00:30:05.795 [2024-07-23 01:09:49.801731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.801891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.801916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.802158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.802334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.802359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.802494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.802655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.802681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.802843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.803085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.803109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.803244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.803405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.803430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.803564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.803696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.803721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 00:30:05.795 [2024-07-23 01:09:49.803888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.804027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.795 [2024-07-23 01:09:49.804051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.795 qpair failed and we were unable to recover it. 
00:30:05.796 [2024-07-23 01:09:49.804218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.804358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.804382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.804514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.804645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.804670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.804814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.804979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.805132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.805473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.805771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.805935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.806080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.806214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.806239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 
00:30:05.796 [2024-07-23 01:09:49.806381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.806515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.806539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.806716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.806933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.806957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.807129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.807296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.807320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.807480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.807617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.807643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.807781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.807920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.807944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.808081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.808219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.808243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.808407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.808547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.808572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 
00:30:05.796 [2024-07-23 01:09:49.808734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.808882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.808906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.809039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.809180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.809204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.809329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.809489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.809513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.809659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.809804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.809829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.809954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.810096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.810121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.810281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.810420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.810449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.810595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.810766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.810791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 
00:30:05.796 [2024-07-23 01:09:49.810932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.811092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.811117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.811276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.811399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.811423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.811571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.811764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.811790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.811925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.812077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.812102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.812264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.812416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.812440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.812570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.812706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.812731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 00:30:05.796 [2024-07-23 01:09:49.812910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.813062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.813086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.796 qpair failed and we were unable to recover it. 
00:30:05.796 [2024-07-23 01:09:49.813237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.813376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.796 [2024-07-23 01:09:49.813403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.813536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.813772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.813798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.813947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.814105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.814130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.814263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.814402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.814427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.814559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.814694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.814719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.814845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.815164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 
00:30:05.797 [2024-07-23 01:09:49.815456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.815823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.815976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.816149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.816318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.816343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.816514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.816675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.816699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.816837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.817012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.817036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.817177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.817343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.817367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.817516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.817682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.817708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 
00:30:05.797 [2024-07-23 01:09:49.817871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.818034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.818059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.818189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.818334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.818358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.818495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.818629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.818655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.818899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.819204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.819524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.819843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.819975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 
00:30:05.797 [2024-07-23 01:09:49.820160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.820497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.820828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.820984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.821148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.821274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.797 [2024-07-23 01:09:49.821298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.797 qpair failed and we were unable to recover it. 00:30:05.797 [2024-07-23 01:09:49.821443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.821579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.821606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.821758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.821900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.821925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.822113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.822265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.822290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 
00:30:05.798 [2024-07-23 01:09:49.822416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.822545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.822570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.822714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.822955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.822979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.823143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.823301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.823328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.823472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.823670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.823695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.823869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.824131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.824156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.824314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.824473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.824499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.824636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.824776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.824803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 
00:30:05.798 [2024-07-23 01:09:49.824932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.825095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.825119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.825278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.825438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.825462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.825628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.825776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.825800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.825967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.826111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.826135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.826274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.826423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.826449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.826585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.826775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.826800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.826949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.827084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.827109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 
00:30:05.798 [2024-07-23 01:09:49.827353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.827489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.827519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.827672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.827809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.827834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.827973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.828108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.828133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.828297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.828463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.828487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.828652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.828864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.828888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.829059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.829325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.829349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.829517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.829651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.829676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 
00:30:05.798 [2024-07-23 01:09:49.829802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.829930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.829955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.830079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.830238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.830263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.830450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.830580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.830604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.830776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.830948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.830977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.798 qpair failed and we were unable to recover it. 00:30:05.798 [2024-07-23 01:09:49.831109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.831242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.798 [2024-07-23 01:09:49.831266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.831398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.831596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.831627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.831771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.831908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.831933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 
00:30:05.799 [2024-07-23 01:09:49.832080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.832214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.832238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.832368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.832529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.832555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.832695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.832843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.832868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.833034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.833274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.833298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.833433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.833562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.833587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.833778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.833918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.833944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.834107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.834233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.834258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 
00:30:05.799 [2024-07-23 01:09:49.834402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.834560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.834584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.834767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.834926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.834950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.835198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.835330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.835356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.835534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.835669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.835694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.835830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.836008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.836032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.836195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.836356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.836380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.836578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.836725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.836750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 
00:30:05.799 [2024-07-23 01:09:49.836882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.837011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.837035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.837200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.837333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.837357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.837503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.837662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.837688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.837863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.838063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.838088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.838255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.838416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.838441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.838606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.838757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.838782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.838914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.839091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.839116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 
00:30:05.799 [2024-07-23 01:09:49.839244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.839452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.839477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.839639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.839784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.839808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.839960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.840106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.840130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.840296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.840457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.840482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.840623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.840758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.799 [2024-07-23 01:09:49.840783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.799 qpair failed and we were unable to recover it. 00:30:05.799 [2024-07-23 01:09:49.840951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.841241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 
00:30:05.800 [2024-07-23 01:09:49.841535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.841841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.841982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.842008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.842181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.842326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.842351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.842540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.842685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.842709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.842840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.843164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.843513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 
00:30:05.800 [2024-07-23 01:09:49.843827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.843980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.844005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.844137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.844273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.844297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.844462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.844627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.844653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.844817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.844997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.845168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.845488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.845833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.845999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 
00:30:05.800 [2024-07-23 01:09:49.846158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.846287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.846311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.846474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.846625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.846659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.846794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.846974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.846999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.847158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.847399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.847425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.847590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.847740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.847765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.847905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.848048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.848076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.848240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.848398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.848423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 
00:30:05.800 [2024-07-23 01:09:49.848558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.848706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.848731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.848879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.849027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.849051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.849242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.849404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.849427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.849564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.849766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.849791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.849922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.850083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.850108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.800 [2024-07-23 01:09:49.850264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.850428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.800 [2024-07-23 01:09:49.850452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.800 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.850589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.850771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.850796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 
00:30:05.801 [2024-07-23 01:09:49.850959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.851096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.851120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.851282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.851424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.851448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.851584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.851731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.851756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.851891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.852235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.852527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.852856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.852989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.853015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 
00:30:05.801 [2024-07-23 01:09:49.853258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.853415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.853439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.853595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.853787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.853812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.853944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.854074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.854098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.854239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.854414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.854438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.854626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.854779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.854807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.854991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.855117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.855141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.855272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.855432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.855456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 
00:30:05.801 [2024-07-23 01:09:49.855631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.855763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.855789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.855930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.856059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.856083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.856210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.856365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.856389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.856528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.856696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.856721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.856884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.857197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.857508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 
00:30:05.801 [2024-07-23 01:09:49.857836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.857990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.858157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.858474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.858769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.858929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.859070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.859261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.859285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.859442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.859575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.859601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.801 qpair failed and we were unable to recover it. 00:30:05.801 [2024-07-23 01:09:49.859750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.859891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.801 [2024-07-23 01:09:49.859917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 
00:30:05.802 [2024-07-23 01:09:49.860110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.860257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.860283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.860455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.860580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.860604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.860777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.860924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.860948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.861106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.861250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.861274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.861400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.861550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.861574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.861714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.861850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.861875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.862039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.862230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.862254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 
00:30:05.802 [2024-07-23 01:09:49.862432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.862560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.862584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.862755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.862924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.862949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.863078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.863239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.863263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.863423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.863545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.863570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.863717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.863850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.863874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.864036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.864193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.864217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.864344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.864480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.864506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 
00:30:05.802 [2024-07-23 01:09:49.864657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.864787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.864815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.864949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.865112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.865136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.865293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.865453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.865477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.865642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.865773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.865798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.865962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.866091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.866116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.866274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.866410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.866434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.866564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.866711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.866737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 
00:30:05.802 [2024-07-23 01:09:49.866908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.867195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.867497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.802 qpair failed and we were unable to recover it. 00:30:05.802 [2024-07-23 01:09:49.867806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.802 [2024-07-23 01:09:49.867935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.867959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.868129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.868267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.868291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.868449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.868620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.868645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.868778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.868908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.868933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 
00:30:05.803 [2024-07-23 01:09:49.869070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.869218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.869243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.869379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.869511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.869535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.869684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.869818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.869842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.869973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.870099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.870123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.870261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.870420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.870445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.870577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.870743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.870768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.870899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 
00:30:05.803 [2024-07-23 01:09:49.871216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.871541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.871826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.871982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.872112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.872272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.872296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.872427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.872561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.872585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.872744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.872890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.872914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.873066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.873227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.873251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 
00:30:05.803 [2024-07-23 01:09:49.873440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.873599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.873652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.873812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.873948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.873972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.874146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.874305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.874329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.874474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.874629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.874654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.874801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.874951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.874975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.875102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.875232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.875256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.875385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.875532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.875557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 
00:30:05.803 [2024-07-23 01:09:49.875708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.875837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.875861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.876009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.876131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.876155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.876318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.876478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.876502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.876637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.876768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.803 [2024-07-23 01:09:49.876792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.803 qpair failed and we were unable to recover it. 00:30:05.803 [2024-07-23 01:09:49.876919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.877059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.877083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.877250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.877397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.877421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.877582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.877733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.877759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 
00:30:05.804 [2024-07-23 01:09:49.877890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.878238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.878532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.878817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.878969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.879099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.879260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.879284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.879475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.879604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.879634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.879780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.879917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.879941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 
00:30:05.804 [2024-07-23 01:09:49.880078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.880236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.880260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.880421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.880543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.880568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.880735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.880867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.880897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.881032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.881161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.881185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.881330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.881491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.881516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.881649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.881811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.881836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.881978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.882108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.882132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 
00:30:05.804 [2024-07-23 01:09:49.882261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.882417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.882442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.882595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.882769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.882793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.882932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.883229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.883539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.883833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.883996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.884159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.884314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.884338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 
00:30:05.804 [2024-07-23 01:09:49.884512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.884695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.884720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.884880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.885206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.885515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.804 [2024-07-23 01:09:49.885828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.804 [2024-07-23 01:09:49.885982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.804 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.886146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.886338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.886362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.886498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.886663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.886688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 
00:30:05.805 [2024-07-23 01:09:49.886864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.886995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.887020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.887178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.887314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.887338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.887479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.887643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.887668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.887837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.887979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.888135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.888453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.888775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.888966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 
00:30:05.805 [2024-07-23 01:09:49.889123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.889253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.889277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.889438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.889579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.889603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.889754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.889887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.889911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.890069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.890212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.890237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.890373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.890563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.890588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.890750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.890906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.890930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.891092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.891230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.891256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 
00:30:05.805 [2024-07-23 01:09:49.891419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.891609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.891639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.891771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.891916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.891940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.892079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.892238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.892262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.892448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.892577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.892601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.892738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.892881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.892906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.893039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.893200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.893224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.893382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.893547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.893571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 
00:30:05.805 [2024-07-23 01:09:49.893723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.893856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.893880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.894049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.894183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.894207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.894337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.894480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.894504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.894681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.894806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.894830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.894992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.895140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.895164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.805 qpair failed and we were unable to recover it. 00:30:05.805 [2024-07-23 01:09:49.895313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.805 [2024-07-23 01:09:49.895474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.895499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.895665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.895807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.895831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 
00:30:05.806 [2024-07-23 01:09:49.895974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.896115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.896141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.896304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.896470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.896496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.896631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.896760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.896785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.896936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.897066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.897092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.897237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.897374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.897398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.897539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.897684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.897709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.897866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.898028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.898054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 
00:30:05.806 [2024-07-23 01:09:49.898225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.898381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.898404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.898591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.898733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.898757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.898893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.899237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.899524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.899851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.899987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.900141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 
00:30:05.806 [2024-07-23 01:09:49.900465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.900821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.900979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.901103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.901235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.901261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.901390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.901538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.901562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.901693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.901824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.901849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.902017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.902152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.902176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.902335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.902472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.902496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 
00:30:05.806 [2024-07-23 01:09:49.902643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.902784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.902808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.902958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.903118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.903142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.806 qpair failed and we were unable to recover it. 00:30:05.806 [2024-07-23 01:09:49.903291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.806 [2024-07-23 01:09:49.903437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.903461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.903624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.903756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.903781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.903939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.904100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.904124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.904249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.904381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.904405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.904564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.904727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.904753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 
00:30:05.807 [2024-07-23 01:09:49.904881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.905030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.905056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.905212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.905339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.905364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.905525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.905672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.905697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.905851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.906190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.906517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.906851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.906975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 
00:30:05.807 [2024-07-23 01:09:49.907140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.907473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.907788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.907963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.908112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.908245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.908269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.908440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.908577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.908601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.908777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.908908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.908934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.909066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.909195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.909220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 
00:30:05.807 [2024-07-23 01:09:49.909409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.909533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.909557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.909726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.909852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.909876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.910007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.910137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.910161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.910297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.910429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.910455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.910602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.910765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.910790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.910924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.911056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.911080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.911246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.911389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.911413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 
00:30:05.807 [2024-07-23 01:09:49.911574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.911734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.911759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.911907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.912063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.912087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.912215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.912355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.807 [2024-07-23 01:09:49.912379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.807 qpair failed and we were unable to recover it. 00:30:05.807 [2024-07-23 01:09:49.912516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.912653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.912679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.912825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.912974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.912998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.913142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.913277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.913303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.913434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.913597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.913629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 
00:30:05.808 [2024-07-23 01:09:49.913798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.913932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.913957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.914116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.914247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.914271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.914429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.914584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.914609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.914743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.914875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.914899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.915065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.915225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.915249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.915378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.915502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.915526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.915656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.915812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.915836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 
00:30:05.808 [2024-07-23 01:09:49.915985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.916146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.916172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.916336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.916504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.916528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.916663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.916801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.916829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.916962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.917086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.917110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.917240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.917375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.917400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.917532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.917691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.917716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.917868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.918041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.918065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 
00:30:05.808 [2024-07-23 01:09:49.918254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.918396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.918420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.918547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.918725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.918750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.918911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.919047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.919071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.919234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.919362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.919386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.919546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.919721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.919746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.919896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.920215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 
00:30:05.808 [2024-07-23 01:09:49.920525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.920860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.920995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.921021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.921186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.921327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.921352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.808 qpair failed and we were unable to recover it. 00:30:05.808 [2024-07-23 01:09:49.921547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.921692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.808 [2024-07-23 01:09:49.921717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.921846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.922031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.922056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.922221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.922356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.922381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.922529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.922689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.922713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 
00:30:05.809 [2024-07-23 01:09:49.922888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.923055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.923079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.923240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.923374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.923398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.923550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.923687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.923712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.923878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.924036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.924060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.924197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.924353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.924377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.924521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.924665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.924691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.924851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 
00:30:05.809 [2024-07-23 01:09:49.925158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.925488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.925808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.925981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.926154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.926485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.926776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.926928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.927086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.927219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.927243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 
00:30:05.809 [2024-07-23 01:09:49.927377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.927506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.927531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.927695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.927832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.927857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.928003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.928164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.928188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.928321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.928488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.928513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.928707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.928843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.928867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.929029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.929216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.929240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.929392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.929532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.929556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 
00:30:05.809 [2024-07-23 01:09:49.929711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.929872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.929896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.930058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.930202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.930227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.930367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.930494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.930518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.930660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.930822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.930847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.809 qpair failed and we were unable to recover it. 00:30:05.809 [2024-07-23 01:09:49.930973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.809 [2024-07-23 01:09:49.931104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.931129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.931264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.931418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.931443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.931601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.931759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.931784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 
00:30:05.810 [2024-07-23 01:09:49.931930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.932072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.932096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.932228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.932366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.932390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.932553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.932730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.932755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.932892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.933021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.933046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.933206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.933372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.933402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.933530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.933689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.933714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.933871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 
00:30:05.810 [2024-07-23 01:09:49.934194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.934475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.934804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.934959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.935119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.935263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.935288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.935429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.935577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.935601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.935738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.935899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.935923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.936061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.936190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.936215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 
00:30:05.810 [2024-07-23 01:09:49.936376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.936516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.936540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.936705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.936865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.936889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.937021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.937151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.937175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.937368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.937503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.937528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.937660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.937789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.937813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.937988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.938117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.938141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 00:30:05.810 [2024-07-23 01:09:49.938315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.938449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.810 [2024-07-23 01:09:49.938473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:05.810 qpair failed and we were unable to recover it. 
00:30:05.810 [2024-07-23 01:09:49.938633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.810 [2024-07-23 01:09:49.938790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.810 [2024-07-23 01:09:49.938814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420
00:30:05.810 qpair failed and we were unable to recover it.
00:30:05.810 - 00:30:06.092 [2024-07-23 01:09:49.938979 - 01:09:49.987685] the same failure sequence repeats for every further connection attempt on tqpair=0x1cf4350 (addr=10.0.0.2, port=4420): posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per attempt), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."
00:30:06.092 [2024-07-23 01:09:49.987827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.987960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.987986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.988140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.988266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.988290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.988465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.988631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.988656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.988792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.988934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.988959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.989124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.989284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.989309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.989468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.989645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.989670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.989834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.989963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.989987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 
00:30:06.092 [2024-07-23 01:09:49.990118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.990261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.990286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.990420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.990573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.990598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.990766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.990895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.990919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.991077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.991205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.991229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.991379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.991537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.991561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.991704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.991837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.991861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.992059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.992198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.992222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 
00:30:06.092 [2024-07-23 01:09:49.992356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.992486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.992511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.992674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.992841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.992865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.993025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.993199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.092 [2024-07-23 01:09:49.993223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.092 qpair failed and we were unable to recover it. 00:30:06.092 [2024-07-23 01:09:49.993382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.993539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.993563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.993711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.993857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.993882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.994043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.994171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.994195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.994337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.994492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.994516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 
00:30:06.093 [2024-07-23 01:09:49.994640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.994767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.994792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.994926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.995089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.995114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.995243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.995376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.995400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.995590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.995769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.995793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.995920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.996080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.996104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.996270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.996427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.996451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.996590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.996756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.996780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 
00:30:06.093 [2024-07-23 01:09:49.996922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.997080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.997104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.997244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.997400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.997425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.997554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.997713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.997738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.997866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.998021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.998046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.998190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.998317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.998341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.998479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.998665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.998690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.998828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 
00:30:06.093 [2024-07-23 01:09:49.999158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.999473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:49.999767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:49.999943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.000079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.000214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.000238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.000381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.000542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.000566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.000698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.000832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.000859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.000989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.001133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.001157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 
00:30:06.093 [2024-07-23 01:09:50.001291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.001427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.001451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.001611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.001769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.001800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.001940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.002096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.002123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.093 [2024-07-23 01:09:50.002253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.002450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.093 [2024-07-23 01:09:50.002475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.093 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.002608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.002762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.002789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.002959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.003094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.003119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.003311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.003448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.003472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 
00:30:06.094 [2024-07-23 01:09:50.003628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.003762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.003787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.003954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.004084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.004109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.004243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.004403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.004427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.004586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.004742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.004768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.004898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.005069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.005098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.005254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.005398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.005422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.005581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.005766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.005792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 
00:30:06.094 [2024-07-23 01:09:50.005934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.006063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.006087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.006245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.006398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.006423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.006558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.006746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.006772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.006914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.007071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.007096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.007245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.007379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.007403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.007573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.007738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.007763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.007924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.008050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.008075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 
00:30:06.094 [2024-07-23 01:09:50.008238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.008397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.008422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.008561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.008695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.008720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.008899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.009032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.009059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.009190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.009351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.009377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.009510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.009702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.009728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.009869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.010039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.010063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.010216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.010351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.010375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 
00:30:06.094 [2024-07-23 01:09:50.010556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.010701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.094 [2024-07-23 01:09:50.010725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.094 qpair failed and we were unable to recover it. 00:30:06.094 [2024-07-23 01:09:50.010875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.011011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.011038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.011178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.011335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.011359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.011535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.011693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.011728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.011904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.012064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.012097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.012247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.012425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.012456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.012630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.012780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.012811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 
00:30:06.095 [2024-07-23 01:09:50.012973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.013143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.013175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.013343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.013709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.013758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.013930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.014094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.014127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.014297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.014474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.014516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.014726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.014898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.014928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.015110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.015291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.015326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.015534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.015707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.015735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 
00:30:06.095 [2024-07-23 01:09:50.015878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.016186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.016490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.016812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.016980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.017138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.017491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.017813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.017981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 
00:30:06.095 [2024-07-23 01:09:50.018125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.018270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.018295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.018433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.018567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.018592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.018764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.018918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.018942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.019092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.019230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.019255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.019413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.019549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.019573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.019709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.019898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.019922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.095 qpair failed and we were unable to recover it. 00:30:06.095 [2024-07-23 01:09:50.020086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.020226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.095 [2024-07-23 01:09:50.020251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 
00:30:06.096 [2024-07-23 01:09:50.020386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.020547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.020571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.020738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.020875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.020899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.021060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.021191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.021215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.021375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.021537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.021562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.021711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.021877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.021901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.022035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.022173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.022200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.022331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.022483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.022511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 
00:30:06.096 [2024-07-23 01:09:50.022665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.022813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.022838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.022998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.023124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.023149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.023288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.023435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.023459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.023628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.023786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.023811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.023944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.024242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.024529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 
00:30:06.096 [2024-07-23 01:09:50.024816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.024992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.025152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.025286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.025311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.025441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.025584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.025608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.025801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.025974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.025999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.026138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.026270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.026295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.026455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.026643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.026668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.026827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.026985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 
00:30:06.096 [2024-07-23 01:09:50.027146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.027461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.027790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.027952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.028088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.028247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.028271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.028396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.028521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.028547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.028685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.028813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.028838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 00:30:06.096 [2024-07-23 01:09:50.028976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.029135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.029159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.096 qpair failed and we were unable to recover it. 
00:30:06.096 [2024-07-23 01:09:50.029305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.096 [2024-07-23 01:09:50.029468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.029492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.029638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.029767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.029792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.029971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.030106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.030130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.030295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.030444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.030469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.030597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.030754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.030780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.030907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.031069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.031094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.031264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.031396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.031420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 
00:30:06.097 [2024-07-23 01:09:50.031618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.031745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.031770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.031900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.032073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.032098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.032238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.032365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.032390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.032553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.032718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.032743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.032880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.033013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.033038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.033185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.033322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.033348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.033504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.033673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.033698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 
00:30:06.097 [2024-07-23 01:09:50.033851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.034007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.034031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.034192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.034352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.034376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.034550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.034694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.034719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.034883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.035034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.035058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.035198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.035352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.035376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.035530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.035673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.035698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.035840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 
00:30:06.097 [2024-07-23 01:09:50.036155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.036454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.036806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.036966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.037100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.037229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.037254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.037412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.037542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.037566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.037695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.037820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.037845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.037980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.038110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.038134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 
00:30:06.097 [2024-07-23 01:09:50.038297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.038443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.097 [2024-07-23 01:09:50.038467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.097 qpair failed and we were unable to recover it. 00:30:06.097 [2024-07-23 01:09:50.038659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.038788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.038817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.038947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.039079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.039105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.039269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.039425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.039450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.039590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.039768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.039794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.039940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.040116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.040141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.040277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.040437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.040462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 
00:30:06.098 [2024-07-23 01:09:50.040634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.040809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.040833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.040967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.041133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.041158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.041300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.041431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.041455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.041632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.041778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.041803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.041949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.042122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.042149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.042319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.042452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.042477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.042625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.042786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.042810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 
00:30:06.098 [2024-07-23 01:09:50.042949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.043105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.043130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.043270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.043456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.043481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.043646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.043835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.043860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.044020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.044147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.044171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.044359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.044485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.044509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.044676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.044821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.044847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.044981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.045141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.045166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 
00:30:06.098 [2024-07-23 01:09:50.045302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.045438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.045462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.045632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.045768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.045792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.045972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.046136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.046160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.046324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.046459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.046484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.046662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.046803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.046827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.046958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.047141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.047166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.047323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.047458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.047484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 
00:30:06.098 [2024-07-23 01:09:50.047640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.047827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.047852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.098 [2024-07-23 01:09:50.048009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.048152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.098 [2024-07-23 01:09:50.048176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.098 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.048326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.048467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.048493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.048623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.048762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.048786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.048946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.049105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.049135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.049314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.049446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.049472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.049604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.049774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.049800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 
00:30:06.099 [2024-07-23 01:09:50.049994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.050121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.050146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.050301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.050434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.050460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.050585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.050733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.050759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.050927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.051123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.051149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.051307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.051439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.051464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.051647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.051795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.051821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.051983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.052119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.052144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 
00:30:06.099 [2024-07-23 01:09:50.052276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.052414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.052441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.052625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.052761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.052787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.052939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.053094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.053120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.053276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.053412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.053438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.053599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.053737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.053763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.053918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.054081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.054106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.054279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.054408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.054433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 
00:30:06.099 [2024-07-23 01:09:50.054566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.054739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.054767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.054940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.055086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.055114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.055279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.055411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.055437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.055601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.055755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.055781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.055915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.056043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.056068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.099 qpair failed and we were unable to recover it. 00:30:06.099 [2024-07-23 01:09:50.056235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.056372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.099 [2024-07-23 01:09:50.056398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.056566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.056703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.056729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 
00:30:06.100 [2024-07-23 01:09:50.056882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.057195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.057531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.057833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.057983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.058166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.058495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.058805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.058982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 
00:30:06.100 [2024-07-23 01:09:50.059116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.059262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.059288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.059424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.059553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.059578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.059718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.059847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.059872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.060067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.060227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.060252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.060394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.060524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.060549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.060706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.060865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.060891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 00:30:06.100 [2024-07-23 01:09:50.061081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.061226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.100 [2024-07-23 01:09:50.061251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.100 qpair failed and we were unable to recover it. 
00:30:06.100 [2024-07-23 01:09:50.061388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.100 [2024-07-23 01:09:50.061551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.100 [2024-07-23 01:09:50.061577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420
00:30:06.100 qpair failed and we were unable to recover it.
00:30:06.100 [The same four-line error sequence repeats for every subsequent connection attempt through [2024-07-23 01:09:50.111357] (elapsed 00:30:06.100 to 00:30:06.106). Only the per-attempt timestamps change; the qpair handle cycles from tqpair=0x7fb170000b90 to 0x7fb168000b90 and then 0x7fb178000b90, always with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."]
00:30:06.106 [2024-07-23 01:09:50.111507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.111664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.111690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.111851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.111987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.112021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.112176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.112352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.112377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.112526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.112664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.112689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.112828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.113196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.113523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 
00:30:06.106 [2024-07-23 01:09:50.113849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.113983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.114171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.114499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.114820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.114995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.115182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.115343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.115367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.115545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.115688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.115713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.115859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 
00:30:06.106 [2024-07-23 01:09:50.116186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.116482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.116837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.116986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.117112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.117305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.117330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.117481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.117610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.117647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.117786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.117951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.117980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.118118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.118248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.118274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 
00:30:06.106 [2024-07-23 01:09:50.118405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.118578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.118602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.118744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.118872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.118896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.119047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.119236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.119260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.119424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.119554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.119579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.106 [2024-07-23 01:09:50.119730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.119878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.106 [2024-07-23 01:09:50.119903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.106 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.120066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.120240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.120264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.120425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.120597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.120629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 
00:30:06.107 [2024-07-23 01:09:50.120758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.120893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.120918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.121056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.121222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.121251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.121414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.121542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.121566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.121699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.121861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.121886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.122048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.122211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.122238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.122376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.122511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.122535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.122683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.122837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.122862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 
00:30:06.107 [2024-07-23 01:09:50.123051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.123196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.123221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.123348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.123480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.123505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.123645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.123779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.123804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.123964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.124120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.124145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.124288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.124414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.124443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.124572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.124730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.124754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.124905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.125067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.125093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 
00:30:06.107 [2024-07-23 01:09:50.125240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.125396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.125421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.125573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.125738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.125764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.125897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.126213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.126502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.126844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.126980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.127163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 
00:30:06.107 [2024-07-23 01:09:50.127459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.127778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.127952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.128116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.128249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.128273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.128415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.128573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.128597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.128741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.128884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.128908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.107 qpair failed and we were unable to recover it. 00:30:06.107 [2024-07-23 01:09:50.129035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.107 [2024-07-23 01:09:50.129190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.129214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.129360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.129494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.129520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 
00:30:06.108 [2024-07-23 01:09:50.129657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.129787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.129814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.130008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.130148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.130173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.130321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.130452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.130478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.130619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.130762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.130787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.130923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.131232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.131522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 
00:30:06.108 [2024-07-23 01:09:50.131830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.131976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.132127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.132449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.132764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.132925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.133093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.133219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.133243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.133401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.133543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.133568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.133697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.133869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.133896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 
00:30:06.108 [2024-07-23 01:09:50.134063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.134228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.134253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.134388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.134528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.134553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.134701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.134835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.134860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.135049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.135183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.135207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.135372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.135512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.135536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.135688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.135832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.135859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.136025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.136172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.136198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 
00:30:06.108 [2024-07-23 01:09:50.136349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.136485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.136509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.136656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.136822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.108 [2024-07-23 01:09:50.136846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.108 qpair failed and we were unable to recover it. 00:30:06.108 [2024-07-23 01:09:50.136975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.137112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.137136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.137268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.137431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.137457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.137639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.137779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.137804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.137951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.138115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.138139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.138331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.138472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.138498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 
00:30:06.109 [2024-07-23 01:09:50.138641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.138786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.138810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.138965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.139128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.139152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.139280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.139442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.139467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.139604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.139775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.139800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.139961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.140091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.140117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.140245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.140376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.140403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.140560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.140703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.140728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 
00:30:06.109 [2024-07-23 01:09:50.140873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.141187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.141520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.141853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.141994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.142173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.142514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.142810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.142962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 
00:30:06.109 [2024-07-23 01:09:50.143120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.143285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.143309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.143444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.143567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.143591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.143779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.143938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.143963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.144137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.144300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.144324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.144487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.144668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.144694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.144832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.144996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.145146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 
00:30:06.109 [2024-07-23 01:09:50.145448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.145759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.145928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.109 [2024-07-23 01:09:50.146092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.146250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.109 [2024-07-23 01:09:50.146274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.109 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.146411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.146567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.146591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.146742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.146879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.146903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.147036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.147196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.147222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.147351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.147514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.147541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 
00:30:06.110 [2024-07-23 01:09:50.147679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.147843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.147868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.147997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.148129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.148155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.148283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.148412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.148438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.148572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.148724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.148750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.148878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.149036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.149060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.149219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.149365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.149389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.149554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.149698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.149723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 
00:30:06.110 [2024-07-23 01:09:50.149875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.150022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.150046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.150192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.150355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.150379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.150515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.150649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.150676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.150839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.151000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.151024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.151190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.151322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.151346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.151475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.151653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.151678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.151830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 
00:30:06.110 [2024-07-23 01:09:50.152160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.152479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.152782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.152951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.153089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.153218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.153242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.153386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.153544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.153568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.153743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.153889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.153914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.154048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.154202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.154226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 
00:30:06.110 [2024-07-23 01:09:50.154399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.154559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.154584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.154745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.154883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.154907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.155036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.155167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.155193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.110 qpair failed and we were unable to recover it. 00:30:06.110 [2024-07-23 01:09:50.155363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.110 [2024-07-23 01:09:50.155502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.155526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.155664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.155820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.155844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.156004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.156155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.156179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.156353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.156487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.156512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 
00:30:06.111 [2024-07-23 01:09:50.156638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.156787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.156812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.156948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.157079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.157103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.157278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.157415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.157440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.157602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.157774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.157799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.157932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.158065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.158089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.158233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.158360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.158384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.158529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.158674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.158699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 
00:30:06.111 [2024-07-23 01:09:50.158832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.159162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.159461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.159769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.159926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.160076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.160262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.160287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.160413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.160584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.160608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.160791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.160919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.160944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 
00:30:06.111 [2024-07-23 01:09:50.161080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.161241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.161265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.161417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.161560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.161585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.161775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.161905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.161929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.162100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.162261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.162286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.162414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.162551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.162575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.162746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.162883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.162907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.163037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.163199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.163224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 
00:30:06.111 [2024-07-23 01:09:50.163381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.163509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.163533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.163697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.163875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.163900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.164049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.164200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.164224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.111 [2024-07-23 01:09:50.164353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.164516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.111 [2024-07-23 01:09:50.164542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.111 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.164671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.164836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.164860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.165011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.165150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.165175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.165335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.165525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.165549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 
00:30:06.112 [2024-07-23 01:09:50.165681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.165821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.165847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.165985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.166141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.166166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.166307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.166443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.166470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.166603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.166772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.166796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.166952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.167078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.167102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.167252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.167379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.167403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.167567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.167734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.167760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 
00:30:06.112 [2024-07-23 01:09:50.167890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.168187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.168501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.168809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.168973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.169111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.169237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.169261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.169391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.169560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.169590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.169755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.169889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.169914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 
00:30:06.112 [2024-07-23 01:09:50.170083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.170235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.170260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.170423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.170558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.170582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.170751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.170885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.170911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.171059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.171186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.171211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.171334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.171499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.171525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.171668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.171809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.171835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.171973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.172130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.172154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 
00:30:06.112 [2024-07-23 01:09:50.172310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.172444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.172469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.172619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.172782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.172810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.172940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.173067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.173091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.173249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.173395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.173419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.112 qpair failed and we were unable to recover it. 00:30:06.112 [2024-07-23 01:09:50.173608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.112 [2024-07-23 01:09:50.173751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.173775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.173951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.174109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.174133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.174288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.174432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.174456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 
00:30:06.113 [2024-07-23 01:09:50.174641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.174774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.174799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.174941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.175074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.175099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.175238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.175370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.175394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.175525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.175658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.175683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.175868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.176188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.176496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 
00:30:06.113 [2024-07-23 01:09:50.176832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.176992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.177139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.177262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.177286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.177431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.177587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.177611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.177760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.177906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.177930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.178066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.178225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.178249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.178407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.178569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.178593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.178739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.178862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.178886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 
00:30:06.113 [2024-07-23 01:09:50.179015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.179147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.179176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.179303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.179462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.179486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.179623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.179750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.179775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.179924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.180065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.180090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.180246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.180380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.180404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.180563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.180693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.180718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 00:30:06.113 [2024-07-23 01:09:50.180853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.181013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.181037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.113 qpair failed and we were unable to recover it. 
00:30:06.113 [2024-07-23 01:09:50.181214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.181344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.113 [2024-07-23 01:09:50.181369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 00:30:06.114 [2024-07-23 01:09:50.181501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.181641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.181666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 00:30:06.114 [2024-07-23 01:09:50.181807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.181969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.181993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 00:30:06.114 [2024-07-23 01:09:50.182125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.182272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.182298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 00:30:06.114 [2024-07-23 01:09:50.182462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.182597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.182631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 00:30:06.114 [2024-07-23 01:09:50.182778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.182927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.182953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 00:30:06.114 [2024-07-23 01:09:50.183086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.183250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.183276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it. 
00:30:06.114 [2024-07-23 01:09:50.183417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.183581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.114 [2024-07-23 01:09:50.183607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.114 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 01:09:50.183 through 01:09:50.232, captured under stream timestamps 00:30:06.114 to 00:30:06.119 ...]
00:30:06.119 [2024-07-23 01:09:50.232736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.232914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.232939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.119 qpair failed and we were unable to recover it. 00:30:06.119 [2024-07-23 01:09:50.233087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.233251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.233275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.119 qpair failed and we were unable to recover it. 00:30:06.119 [2024-07-23 01:09:50.233404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.233548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.233572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.119 qpair failed and we were unable to recover it. 00:30:06.119 [2024-07-23 01:09:50.233746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.233878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.233902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.119 qpair failed and we were unable to recover it. 00:30:06.119 [2024-07-23 01:09:50.234058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.234243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.234267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.119 qpair failed and we were unable to recover it. 00:30:06.119 [2024-07-23 01:09:50.234394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.234530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.234554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.119 qpair failed and we were unable to recover it. 00:30:06.119 [2024-07-23 01:09:50.234700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.119 [2024-07-23 01:09:50.234867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.234892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 
00:30:06.120 [2024-07-23 01:09:50.235035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.235169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.235193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.235351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.235479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.235503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.235662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.235792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.235816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.235985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.236111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.236134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.236277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.236440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.236464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.236629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.236755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.236779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.236912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.237043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.237067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 
00:30:06.120 [2024-07-23 01:09:50.237235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.237372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.237399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.237566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.237699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.237724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.237886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.238177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.238486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.238804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.238985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.239146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.239275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.239299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 
00:30:06.120 [2024-07-23 01:09:50.239428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.239567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.239591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.239744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.239873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.239896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.240057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.240185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.240209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.240380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.240506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.240530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.240661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.240826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.240850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.241010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.241158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.241182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.241346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.241500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.241525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 
00:30:06.120 [2024-07-23 01:09:50.241701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.241830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.241854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.242002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.242140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.242165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.242293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.242447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.242472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.242626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.242779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.242803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.242968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.243100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.243125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.243260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.243391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.243415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 00:30:06.120 [2024-07-23 01:09:50.243604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.243740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.120 [2024-07-23 01:09:50.243765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.120 qpair failed and we were unable to recover it. 
00:30:06.120 [2024-07-23 01:09:50.243907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.244051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.244075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.244263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.244394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.244420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.244552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.244709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.244734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.244873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.245197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.245496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.245818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.245983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 
00:30:06.121 [2024-07-23 01:09:50.246106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.246297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.246321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.246456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.246600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.246630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.246774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.246940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.246965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.247096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.247241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.247266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.247401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.247562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.247586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.247732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.247855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.247880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.248036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.248175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.248201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 
00:30:06.121 [2024-07-23 01:09:50.248389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.248514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.248538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.248669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.248808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.248832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.248959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.249101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.249125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.249267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.249409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.249433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.249563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.249712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.249738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.249895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.250052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.250077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.250251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.250394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.250418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 
00:30:06.121 [2024-07-23 01:09:50.250543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.250677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.250704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.250835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.251220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.251517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.251843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.251995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.252132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.252265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.252291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 00:30:06.121 [2024-07-23 01:09:50.252424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.252549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.252573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.121 qpair failed and we were unable to recover it. 
00:30:06.121 [2024-07-23 01:09:50.252717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.252865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.121 [2024-07-23 01:09:50.252891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.253052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.253207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.253231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.253364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.253558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.253582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.253736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.253862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.253886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.254011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.254174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.254198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.254362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.254519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.254544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.254674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.254810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.254835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 
00:30:06.122 [2024-07-23 01:09:50.254982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.255107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.255131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.255291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.255449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.255473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.255607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.255759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.255784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.255935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.256094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.256119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.256245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.256407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.256432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.256594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.256725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.256750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.256909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 
00:30:06.122 [2024-07-23 01:09:50.257193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.257534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.257845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.257981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.258004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.258197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.258326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.258350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.258488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.258632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.258658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.258849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.259178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 
00:30:06.122 [2024-07-23 01:09:50.259473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.259814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.259969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.260158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.260297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.260321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.260450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.260620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.122 [2024-07-23 01:09:50.260645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.122 qpair failed and we were unable to recover it. 00:30:06.122 [2024-07-23 01:09:50.260802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.260935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.260959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.261116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.261273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.261297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.261428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.261558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.261582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 
00:30:06.123 [2024-07-23 01:09:50.261737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.261875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.261899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.262041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.262196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.262220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.262383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.262526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.262552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.262687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.262847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.262872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.263050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.263181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.263205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.263337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.263475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.263501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 00:30:06.123 [2024-07-23 01:09:50.263671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.263798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.263822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it. 
00:30:06.123 [2024-07-23 01:09:50.263953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.264074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.123 [2024-07-23 01:09:50.264098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.123 qpair failed and we were unable to recover it.
[Log condensed: the same error pattern — posix.c:1032:posix_sock_create reporting connect() failed with errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats unchanged for every reconnect attempt in this span, with timestamps running from 01:09:50.263953 through 01:09:50.313153.]
00:30:06.404 [2024-07-23 01:09:50.313341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.404 [2024-07-23 01:09:50.313498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.404 [2024-07-23 01:09:50.313522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.404 qpair failed and we were unable to recover it. 00:30:06.404 [2024-07-23 01:09:50.313652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.404 [2024-07-23 01:09:50.313813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.404 [2024-07-23 01:09:50.313837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.404 qpair failed and we were unable to recover it. 00:30:06.404 [2024-07-23 01:09:50.313995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.404 [2024-07-23 01:09:50.314132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.404 [2024-07-23 01:09:50.314158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.314319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.314447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.314471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.314636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.314773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.314797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.314936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.315071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.315096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.315255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.315388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.315414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 
00:30:06.405 [2024-07-23 01:09:50.315573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.315769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.315796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.315923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.316053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.316077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.316238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.316371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.316395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.316557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.316746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.316772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.316908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.317196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.317487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 
00:30:06.405 [2024-07-23 01:09:50.317781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.317947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.318092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.318217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.318241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.318396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.318559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.318583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.318735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.318863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.318887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.319020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.319180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.319204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.319364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.319493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.319518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.319657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.319822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.319846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 
00:30:06.405 [2024-07-23 01:09:50.319998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.320149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.320174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.320329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.320492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.320516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.320648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.320804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.320833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.320978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.321107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.321131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.321294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.321418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.405 [2024-07-23 01:09:50.321443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.405 qpair failed and we were unable to recover it. 00:30:06.405 [2024-07-23 01:09:50.321579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.321747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.321772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.321937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.322098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.322122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 
00:30:06.406 [2024-07-23 01:09:50.322262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.322388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.322413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.322575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.322711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.322736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.322896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.323206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.323542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.323835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.323998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.324164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.324322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.324346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 
00:30:06.406 [2024-07-23 01:09:50.324505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.324636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.324663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.324799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.324959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.324984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.325114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.325269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.325294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.325422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.325555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.325579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.325752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.325882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.325908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.326044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.326207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.326231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.326373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.326500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.326525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 
00:30:06.406 [2024-07-23 01:09:50.326691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.326827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.326852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.327015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.327173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.327201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.327334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.327471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.327496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.327633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.327772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.327796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.327987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.328117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.328144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.328305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.328461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.328486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.328653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.328776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.328802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 
00:30:06.406 [2024-07-23 01:09:50.328949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.329108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.329133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.329286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.329448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.329472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.329605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.329783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.329807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.329941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.330095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.330119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.330246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.330367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.330396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.406 qpair failed and we were unable to recover it. 00:30:06.406 [2024-07-23 01:09:50.330559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.330714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.406 [2024-07-23 01:09:50.330739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.330866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.330998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.331022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 
00:30:06.407 [2024-07-23 01:09:50.331178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.331305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.331329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.331465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.331635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.331661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.331834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.331992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.332178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.332462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.332801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.332996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.333161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.333304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.333328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 
00:30:06.407 [2024-07-23 01:09:50.333488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.333622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.333648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.333828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.333971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.333995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.334131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.334294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.334319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.334447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.334592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.334622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.334759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.334904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.334928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.335077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.335238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.335263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.335393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.335522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.335547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 
00:30:06.407 [2024-07-23 01:09:50.335686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.335831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.335856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.336021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.336182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.336206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.336338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.336474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.336499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.336640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.336771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.336797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.336936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.337071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.337095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.337253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.337440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.337464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.337628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.337778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.337805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 
00:30:06.407 [2024-07-23 01:09:50.337949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.338081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.338106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.338263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.338393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.338418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.338563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.338694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.338719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.338884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.339186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.339509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.407 qpair failed and we were unable to recover it. 00:30:06.407 [2024-07-23 01:09:50.339836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.407 [2024-07-23 01:09:50.339993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.340017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 
00:30:06.408 [2024-07-23 01:09:50.340152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.340311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.340335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.340459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.340602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.340643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.340807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.340997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.341151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.341446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.341779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.341932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.342134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.342272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.342298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 
00:30:06.408 [2024-07-23 01:09:50.342435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.342596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.342625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.342761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.342915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.342940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.343095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.343222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.343246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.343442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.343575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.343600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.343733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.343887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.343912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.344059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.344231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.344256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.344386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.344514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.344539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 
00:30:06.408 [2024-07-23 01:09:50.344699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.344867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.344893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.345033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.345223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.345247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.345376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.345533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.345557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.345691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.345867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.345891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.346022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.346178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.346202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.346361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.346496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.346522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.346691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.346853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.346879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 
00:30:06.408 [2024-07-23 01:09:50.347018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.347179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.347203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.347328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.347472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.347496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.347666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.347810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.347834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.347965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.348096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.348120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.348281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.348414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.348438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.348600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.348749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.348774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 00:30:06.408 [2024-07-23 01:09:50.348906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.349056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.408 [2024-07-23 01:09:50.349082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.408 qpair failed and we were unable to recover it. 
00:30:06.409 [2024-07-23 01:09:50.349220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.349409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.349433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.349570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.349707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.349734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.349868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.349992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.350175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.350486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.350798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.350968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.351102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.351259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.351284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 
00:30:06.409 [2024-07-23 01:09:50.351445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.351573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.351598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.351736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.351867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.351893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.352026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.352154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.352179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.352306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.352471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.352497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.352632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.352797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.352823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.352989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.353116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.353140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.353319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.353479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.353504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 
00:30:06.409 [2024-07-23 01:09:50.353670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.353806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.353830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.353989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.354153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.354180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.354315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.354444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.354471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.354600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.354754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.354780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.354957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.355116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.355140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.355279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.355436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.355461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.355619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.355751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.355775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 
00:30:06.409 [2024-07-23 01:09:50.355902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.356063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.356088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.356253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.356386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.356413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.409 [2024-07-23 01:09:50.356558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.356683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.409 [2024-07-23 01:09:50.356709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.409 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.356859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.356991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.357160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.357478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.357809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.357978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 
00:30:06.410 [2024-07-23 01:09:50.358127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.358263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.358290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.358429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.358568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.358594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.358735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.358868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.358894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.359021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.359195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.359220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.359351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.359491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.359517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.359695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.359831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.359857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.359987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.360149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.360173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 
00:30:06.410 [2024-07-23 01:09:50.360307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.360431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.360455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.360633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.360806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.360831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.360995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.361153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.361178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.361308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.361447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.361471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.361609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.361751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.361777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.361913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.362073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.362097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.362255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.362388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.362413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 
00:30:06.410 [2024-07-23 01:09:50.362555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.362714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.362740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.362873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.363008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.363033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.363164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.363324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.363348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.363484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.363671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.363696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.363856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.364184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.364478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 
00:30:06.410 [2024-07-23 01:09:50.364812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.364970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.365105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.365231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.365256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.365394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.365545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.365570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.410 qpair failed and we were unable to recover it. 00:30:06.410 [2024-07-23 01:09:50.365735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.410 [2024-07-23 01:09:50.365872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.365897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.366041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.366186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.366210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.366351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.366508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.366533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.366664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.366851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.366876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 
00:30:06.411 [2024-07-23 01:09:50.367026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.367156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.367180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.367337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.367474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.367498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.367636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.367768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.367792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.367929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.368254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.368543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.368847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.368994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 
00:30:06.411 [2024-07-23 01:09:50.369196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.369515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.369845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.369998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.370131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.370295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.370321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.370455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.370618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.370643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.370778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.370906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.370930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.371089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.371214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.371238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 
00:30:06.411 [2024-07-23 01:09:50.371370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.371563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.371588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.371730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.371875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.371900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.372043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.372188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.372218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.372380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.372540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.372565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.372752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.372912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.372936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.373100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.373266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.373290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.373416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.373582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.373608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 
00:30:06.411 [2024-07-23 01:09:50.373760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.373901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.373926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.374108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.374268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.374293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.374450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.374597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.374627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.374774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.374907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.411 [2024-07-23 01:09:50.374933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.411 qpair failed and we were unable to recover it. 00:30:06.411 [2024-07-23 01:09:50.375066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.375208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.375233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.375391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.375536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.375564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.375704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.375837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.375864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 
00:30:06.412 [2024-07-23 01:09:50.375996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.376154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.376178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.376365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.376492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.376516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.376671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.376837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.376862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.377007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.377134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.377159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.377316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.377452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.377476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.377643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.377772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.377797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.377945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.378131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.378155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 
00:30:06.412 [2024-07-23 01:09:50.378288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.378419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.378443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.378572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.378753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.378783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.378923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.379212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.379535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.379833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.379975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.380140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 
00:30:06.412 [2024-07-23 01:09:50.380437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.380752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.380921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.381084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.381227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.381252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.381442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.381575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.381599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.381739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.381870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.381894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.382030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.382194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.382219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.382347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.382477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.382501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 
00:30:06.412 [2024-07-23 01:09:50.382642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.382815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.382840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.382999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.383140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.383164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.383324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.383450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.383474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.383622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.383761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.383785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.412 qpair failed and we were unable to recover it. 00:30:06.412 [2024-07-23 01:09:50.383966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.384120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.412 [2024-07-23 01:09:50.384144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.413 qpair failed and we were unable to recover it. 00:30:06.413 [2024-07-23 01:09:50.384273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.413 [2024-07-23 01:09:50.384422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.413 [2024-07-23 01:09:50.384446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.413 qpair failed and we were unable to recover it. 00:30:06.413 [2024-07-23 01:09:50.384610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.413 [2024-07-23 01:09:50.384768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.413 [2024-07-23 01:09:50.384794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.413 qpair failed and we were unable to recover it. 
00:30:06.413 [2024-07-23 01:09:50.384931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.413 [2024-07-23 01:09:50.385083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.413 [2024-07-23 01:09:50.385108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:06.413 qpair failed and we were unable to recover it.
00:30:06.413 [2024-07-23 01:09:50.385248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.413 [2024-07-23 01:09:50.385377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.413 [2024-07-23 01:09:50.385403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:06.413 qpair failed and we were unable to recover it.
[... the same four-line sequence (two "connect() failed, errno = 111" errors from posix_sock_create, one "sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420" from nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt between 01:09:50.385544 and 01:09:50.434030 ...]
00:30:06.418 [2024-07-23 01:09:50.434200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.418 [2024-07-23 01:09:50.434374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.418 [2024-07-23 01:09:50.434400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420
00:30:06.418 qpair failed and we were unable to recover it.
00:30:06.418 [2024-07-23 01:09:50.434538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.434690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.434715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.434882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.435220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.435551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.435851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.435975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.436171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.436482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-07-23 01:09:50.436815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.436998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.418 qpair failed and we were unable to recover it. 00:30:06.418 [2024-07-23 01:09:50.437150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.437303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.418 [2024-07-23 01:09:50.437328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.437493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.437655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.437681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.437845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.437972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.437996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.438154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.438282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.438307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.438475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.438609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.438640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.438772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.438914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.438944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-07-23 01:09:50.439134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.439265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.439290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.439463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.439592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.439621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.439761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.439887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.439912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.440073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.440240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.440266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.440411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.440543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.440570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.440712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.440857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.440881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.441022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.441162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.441188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-07-23 01:09:50.441351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.441495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.441520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.441697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.441837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.441863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.442032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.442169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.442196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.442370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.442508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.442533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.442682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.442820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.442844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.442983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.443140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.443175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.443373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.443522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.443547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-07-23 01:09:50.443719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.443867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.443892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.444093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.444230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.444256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.444388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.444549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.444573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.444755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.444885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.444910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.445070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.445196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.445220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.445363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.445504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.445529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 00:30:06.419 [2024-07-23 01:09:50.445666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.445836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.445861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-07-23 01:09:50.445996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.419 [2024-07-23 01:09:50.446138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.446163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.446304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.446461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.446485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.446655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.446816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.446841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.447011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.447170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.447194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.447326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.447452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.447475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.447632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.447778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.447802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.447995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.448132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.448156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 
00:30:06.420 [2024-07-23 01:09:50.448317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.448466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.448491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.448626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.448792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.448819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.448956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.449091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.449116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.449309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.449473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.449498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.449677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.449836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.449861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.450001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.450138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.450162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.450334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.450501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.450525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 
00:30:06.420 [2024-07-23 01:09:50.450701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.450835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.450860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.450990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.451149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.451175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.451342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.451506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.451530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.451659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.451788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.451812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.451976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.452103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.452127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.452267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.452430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.452456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.452621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.452749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.452774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 
00:30:06.420 [2024-07-23 01:09:50.452934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.453095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.453120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.453292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.453419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.453447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.453619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.453749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.453774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.453929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.454062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.454088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.454225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.454372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.454398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.454608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.454748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.454773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.420 [2024-07-23 01:09:50.454912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.455075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.455100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 
00:30:06.420 [2024-07-23 01:09:50.455255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.455386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.420 [2024-07-23 01:09:50.455410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.420 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.455540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.455690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.455717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.455858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.456186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.456493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.456806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.456966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.457132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 
00:30:06.421 [2024-07-23 01:09:50.457445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.457748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.457904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.458068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.458225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.458250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.458414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.458558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.458583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.458738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.458875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.458900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.459075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.459215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.459250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.459384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.459512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.459537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 
00:30:06.421 [2024-07-23 01:09:50.459684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.459841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.459865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.460010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.460151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.460176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.460320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.460483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.460517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.460652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.460825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.460851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.460989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.461143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.461168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.461354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.461479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.461504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.461663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.461789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.461813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 
00:30:06.421 [2024-07-23 01:09:50.461961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.462120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.462145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.462310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.462484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.462510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.462645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.462793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.462819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.462966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.463099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.463123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.463311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.463449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.463475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.463640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.463788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.463814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.463951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.464078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.464103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 
00:30:06.421 [2024-07-23 01:09:50.464243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.464374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.464411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.421 qpair failed and we were unable to recover it. 00:30:06.421 [2024-07-23 01:09:50.464570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.421 [2024-07-23 01:09:50.464732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.464758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.464942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.465075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.465099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.465272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.465436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.465461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.465622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.465760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.465784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.465912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.466088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.466117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.466283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.466445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.466469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 
00:30:06.422 [2024-07-23 01:09:50.466602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.466751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.466775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.466926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.467097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.467121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.467292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.467416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.467440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.467576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.467745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.467771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.467918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.468065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.468090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.468231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.468385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.468410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.468531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.468699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.468725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 
00:30:06.422 [2024-07-23 01:09:50.468858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.469195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.469514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.469835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.469988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.470154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.470282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.470306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.470469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.470600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.470632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.470781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.470954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.470979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 
00:30:06.422 [2024-07-23 01:09:50.471122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.471294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.471319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.471472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.471661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.471687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.471847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.471981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.472139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.472472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.472806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.472991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.473132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.473286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.473316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 
00:30:06.422 [2024-07-23 01:09:50.473478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.473623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.473659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.422 [2024-07-23 01:09:50.473800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.473989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.422 [2024-07-23 01:09:50.474013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.422 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.474181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.474346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.474371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.474505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.474636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.474660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.474794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.474964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.474991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.475128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.475254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.475279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.475446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.475577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.475602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 
00:30:06.423 [2024-07-23 01:09:50.475794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.475925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.475950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.476115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.476260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.476285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.476449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.476625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.476654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.476796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.476925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.476950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.477113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.477252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.477277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.477440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.477605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.477637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.477771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.477931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.477956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 
00:30:06.423 [2024-07-23 01:09:50.478096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.478258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.478283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.478423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.478556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.478583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.478728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.478856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.478881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.479013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.479146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.479172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.479312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.479440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.479465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.479655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.479816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.479844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.479989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.480147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.480172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 
00:30:06.423 [2024-07-23 01:09:50.480335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.480469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.480494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.480642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.480799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.480825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.480966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.481100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.481125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.481254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.481438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.481463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.481593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.481724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.481749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.423 qpair failed and we were unable to recover it. 00:30:06.423 [2024-07-23 01:09:50.481906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.423 [2024-07-23 01:09:50.482029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.482054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.482215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.482370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.482395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 
00:30:06.424 [2024-07-23 01:09:50.482528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.482689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.482715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.482851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.483197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.483480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.483803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.483964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.484119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.484317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.484342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.484474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.484633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.484658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 
00:30:06.424 [2024-07-23 01:09:50.484825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.484956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.484981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.485127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.485286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.485311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.485442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.485610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.485643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.485796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.485959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.485984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.486118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.486241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.486266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.486448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.486582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.486607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.486736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.486872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.486899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 
00:30:06.424 [2024-07-23 01:09:50.487071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.487234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.487259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.487405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.487532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.487557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.487701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.487835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.487860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.488013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.488154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.488179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.488313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.488484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.488509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.488694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.488837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.488864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.489032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.489165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.489190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 
00:30:06.424 [2024-07-23 01:09:50.489330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.489452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.489476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.489618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.489762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.489786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.489916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.490092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.490116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.490294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.490425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.490452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.490616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.490746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.490771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.490940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.491080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.424 [2024-07-23 01:09:50.491105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.424 qpair failed and we were unable to recover it. 00:30:06.424 [2024-07-23 01:09:50.491252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.491436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.491461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 
00:30:06.425 [2024-07-23 01:09:50.491595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.491756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.491781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.491910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.492040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.492064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.492221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.492349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.492374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.492510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.492655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.492682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.492868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.493036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.493060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.493224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.493361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.493388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.493528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.493670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.493695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 
00:30:06.425 [2024-07-23 01:09:50.493855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.494191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.494491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.494794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.494981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.495125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.495270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.495295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.495426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.495592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.495642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.495778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.495915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.495940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 
00:30:06.425 [2024-07-23 01:09:50.496096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.496281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.496306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.496467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.496626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.496652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.496786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.496944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.496969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.497131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.497273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.497298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.497432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.497571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.497595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.497744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.497888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.497913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.498084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.498224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.498249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 
00:30:06.425 [2024-07-23 01:09:50.498411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.498545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.498569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.498733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.498874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.498901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.499045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.499173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.499197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.499328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.499470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.499494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.499662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.499822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.499848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.499994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.500161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.500188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.425 qpair failed and we were unable to recover it. 00:30:06.425 [2024-07-23 01:09:50.500349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.425 [2024-07-23 01:09:50.500482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.500509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 
00:30:06.426 [2024-07-23 01:09:50.500648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.500824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.500851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.500991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.501156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.501181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.501342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.501485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.501510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.501677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.501814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.501842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.502013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.502176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.502201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.502359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.502495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.502522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.502678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.502813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.502839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 
00:30:06.426 [2024-07-23 01:09:50.502987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.503123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.503148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.503276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.503398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.503422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.503611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.503794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.503819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.503955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.504243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.504524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.504835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.504998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.505022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 
00:30:06.426 [2024-07-23 01:09:50.505160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.505352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.505377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.505510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.505673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.505698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb178000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.505878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.506037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.506068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.506238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.506400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.506427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.506600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.506767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.506795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.506927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.507083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.507109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 00:30:06.426 [2024-07-23 01:09:50.507242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.507402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.507428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it. 
00:30:06.426 [2024-07-23 01:09:50.507626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.507753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.426 [2024-07-23 01:09:50.507778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420 00:30:06.426 qpair failed and we were unable to recover it.
00:30:06.429 [2024-07-23 01:09:50.507931 .. 01:09:50.535591] (the identical sequence -- posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb170000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats for every connection attempt in this interval)
00:30:06.429 [2024-07-23 01:09:50.535750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.429 [2024-07-23 01:09:50.535926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.429 [2024-07-23 01:09:50.535954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.429 qpair failed and we were unable to recover it.
00:30:06.432 [2024-07-23 01:09:50.536099 .. 01:09:50.557880] (the identical sequence -- posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats for every connection attempt in this interval)
00:30:06.432 [2024-07-23 01:09:50.558019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.558155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.558181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.558329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.558503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.558528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.558663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.558801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.558826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.558994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.559138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.559163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.559334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.559466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.559490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.559632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.559819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.559843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.560003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.560164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.560188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 
00:30:06.432 [2024-07-23 01:09:50.560339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.560499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.560523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.560673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.560802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.560827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.560951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.561125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.561150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.561278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.561411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.561437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.561607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.561746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.561770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.561903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.562032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.562057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.562215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.562358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.562382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 
00:30:06.432 [2024-07-23 01:09:50.562506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.562668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.562693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.562862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.563022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.563047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.563236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.563360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.432 [2024-07-23 01:09:50.563384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.432 qpair failed and we were unable to recover it. 00:30:06.432 [2024-07-23 01:09:50.563556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.563690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.563716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.563880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.564216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.564547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 
00:30:06.433 [2024-07-23 01:09:50.564869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.564994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.565151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.565462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.565776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.565928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.566100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.566228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.566252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.566385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.566542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.566566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.566707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.566838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.566863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 
00:30:06.433 [2024-07-23 01:09:50.566993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.567151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.567175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.567334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.567467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.567491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.567650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.567807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.567832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.568030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.568156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.568180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.568329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.568466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.568493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.568670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.568803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.568831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.568964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.569127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.569153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 
00:30:06.433 [2024-07-23 01:09:50.569315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.569451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.569477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.569604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.569750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.569775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.569935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.570091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.570116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.570248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.570385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.570410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.570569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.570751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.570776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.570918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.571083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.571107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.571240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.571372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.571398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 
00:30:06.433 [2024-07-23 01:09:50.571528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.571689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.571715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.571850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.572006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.572031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.572169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.572330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.572355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.572490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.572626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.433 [2024-07-23 01:09:50.572651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.433 qpair failed and we were unable to recover it. 00:30:06.433 [2024-07-23 01:09:50.572789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.572924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.572948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.573075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.573226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.573252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.573419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.573547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.573571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 
00:30:06.434 [2024-07-23 01:09:50.573719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.573858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.573883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.574049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.574190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.574215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.574373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.574536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.574560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.574694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.574830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.574856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.574986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.575144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.575169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.575307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.575437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.575461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.575653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.575784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.575810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 
00:30:06.434 [2024-07-23 01:09:50.575942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.576106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.576130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.576294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.576441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.576465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.576596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.576732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.576757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.576887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.577170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.577488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.577798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.577957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 
00:30:06.434 [2024-07-23 01:09:50.578111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.578266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.578290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.578428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.578576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.578600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.578736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.578872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.578898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.579038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.579200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.579224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.579389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.579510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.579534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.579678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.579807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.579832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.579968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.580113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.580138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 
00:30:06.434 [2024-07-23 01:09:50.580301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.580429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.580454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 01:09:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:06.434 [2024-07-23 01:09:50.580623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 01:09:50 -- common/autotest_common.sh@852 -- # return 0 00:30:06.434 [2024-07-23 01:09:50.580773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 01:09:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:06.434 [2024-07-23 01:09:50.580799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 01:09:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:06.434 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.434 [2024-07-23 01:09:50.580989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.581124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.581149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.434 [2024-07-23 01:09:50.581292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.581449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.434 [2024-07-23 01:09:50.581481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.434 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.581621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.581768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.581792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.581947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.582103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.582127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.582258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.582440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.582464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 
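The shell trace interleaved above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) is the tail end of the harness waiting for the NVMe-oF target application to come up. The real logic lives in common/autotest_common.sh; the sketch below is only a rough illustration of that style of wait loop, with the function name, retry budget, and rpc.py probe all assumed rather than taken from the script:

# Illustrative sketch of a "wait until the target answers RPCs" loop.
# The counter mirrors the (( i == 0 )) check seen in the trace; limits are assumed.
wait_for_target() {
    local i=0
    local max_attempts=${1:-50}          # assumed retry budget
    while (( i < max_attempts )); do
        if ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; then
            return 0                     # target is up and answering RPCs
        fi
        sleep 0.5
        (( ++i ))
    done
    return 1                             # gave up; the caller treats this as fatal
}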
00:30:06.435 [2024-07-23 01:09:50.582601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.582746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.582774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.582912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.583049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.583074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.583258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.583391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.583415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.583544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.583704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.583730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.583887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.584046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.584071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.435 [2024-07-23 01:09:50.584199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.584346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.435 [2024-07-23 01:09:50.584373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.435 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.584525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.584693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.584719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 
00:30:06.698 [2024-07-23 01:09:50.584870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.585046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.585071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.585218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.585377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.585402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.585531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.585707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.585732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.585895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.586064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.586090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.586280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.586412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.586436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.586579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.586764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.586790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.586939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.587101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.587125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 
00:30:06.698 [2024-07-23 01:09:50.587269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.587417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.587441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.587608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.587747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.587771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.587900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.588066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.588090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.588285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.588452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.588477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.588620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.588789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.588814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.588940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.589108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.589133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 00:30:06.698 [2024-07-23 01:09:50.589275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.589466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.698 [2024-07-23 01:09:50.589491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.698 qpair failed and we were unable to recover it. 
00:30:06.699 [2024-07-23 01:09:50.589625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.589759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.589783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.589933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.590092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.590117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.590280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.590416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.590440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.590599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.590771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.590796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.590976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.591129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.591154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.591321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.591488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.591512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.591663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.591805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.591830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 
00:30:06.699 [2024-07-23 01:09:50.591958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.592122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.592153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.592317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.592452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.592477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.592643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.592776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.592802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.592939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.593102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.593127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.593266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.593394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.593419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.593549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.593737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.593763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.593896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.594084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.594113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 
00:30:06.699 [2024-07-23 01:09:50.594247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.594421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.594446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.594611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.594746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.594771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.594937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.595078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.595103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.595277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.595402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.595428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.595575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.595734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.595760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.595904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.596224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 
00:30:06.699 [2024-07-23 01:09:50.596526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.596843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.596999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.597034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.597173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.597306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.597330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.597469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.597628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.597653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 01:09:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.699 [2024-07-23 01:09:50.597790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 01:09:50 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.699 [2024-07-23 01:09:50.597921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.597946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 01:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.699 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.699 [2024-07-23 01:09:50.598110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.598291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.598315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.699 qpair failed and we were unable to recover it. 00:30:06.699 [2024-07-23 01:09:50.598478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.598634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.699 [2024-07-23 01:09:50.598659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 
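Interleaved with the connection errors above, the bash xtrace shows the test flow continuing: nvmf/common.sh installs its cleanup trap (process_shm followed by nvmftestfini on SIGINT/SIGTERM/EXIT), and host/target_disconnect.sh then runs rpc_cmd bdev_malloc_create 64 512 -b Malloc0, creating a 64 MB RAM-backed bdev with a 512-byte block size named Malloc0 on the target. A rough standalone equivalent, assuming SPDK's scripts/rpc.py and a target application already listening on the default RPC socket, would be:

  # Sketch only: direct rpc.py form of the rpc_cmd call traced above.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # The RPC prints the new bdev's name ("Malloc0"), which is the stray
  # "Malloc0" token that surfaces a little later in this mixed output.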
00:30:06.700 [2024-07-23 01:09:50.598795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.598951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.598976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.599153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.599287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.599311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.599451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.599579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.599605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.599767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.599929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.599953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.600087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.600226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.600250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.600415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.600586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.600610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.600767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.600903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.600929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 
00:30:06.700 [2024-07-23 01:09:50.601061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.601221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.601246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.601384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.601517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.601541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.601727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.601886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.601912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.602054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.602187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.602212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.602344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.602500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.602525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.602659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.602823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.602849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.602986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.603127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.603152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 
00:30:06.700 [2024-07-23 01:09:50.603285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.603506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.603531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.603740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.603900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.603925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.604096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.604250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.604276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.604437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.604597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.604635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.604832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.604975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.604999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.605135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.605319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.605343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.605507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.605638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.605664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 
00:30:06.700 [2024-07-23 01:09:50.605799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.605946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.605971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.606133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.606287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.606311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.606442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.606601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.606631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.606787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.606926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.606951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.607090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.607223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.607247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.607394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.607527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.607551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 00:30:06.700 [2024-07-23 01:09:50.607697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.607827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.607851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.700 qpair failed and we were unable to recover it. 
00:30:06.700 [2024-07-23 01:09:50.607991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.700 [2024-07-23 01:09:50.608130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.608154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.608314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.608447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.608471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.608621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.608804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.608828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.608963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.609198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.609222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.609397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.609564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.609589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.609738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.609895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.609920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.610096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.610260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.610284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 
00:30:06.701 [2024-07-23 01:09:50.610430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.610564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.610588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.610736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.610930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.610954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.611097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.611243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.611268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.611408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.611538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.611568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.611771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.611899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.611924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.612058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.612188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.612213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.612372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.612512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.612537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 
00:30:06.701 [2024-07-23 01:09:50.612691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.612828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.612852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.613023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.613153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.613178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.613366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.613536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.613560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.613738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.613900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.613924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.614084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.614245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.614270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.614432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.614572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.614596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.614741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.614904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.614937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 
00:30:06.701 [2024-07-23 01:09:50.615112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.615239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.615263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.615426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.615574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.615605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.615768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.615917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.615941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.616131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.616295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.616319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.616473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.616688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.616714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.616844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.617170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 
00:30:06.701 [2024-07-23 01:09:50.617495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.701 qpair failed and we were unable to recover it. 00:30:06.701 [2024-07-23 01:09:50.617805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.701 [2024-07-23 01:09:50.617936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.617961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.618088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.618242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.618267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.618407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.618539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.618563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.618713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.618844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.618869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.619009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.619167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.619191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.619352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.619511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.619536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 
00:30:06.702 [2024-07-23 01:09:50.619689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.619820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.619844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.620010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.620139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.620164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.620325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.620481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 Malloc0 00:30:06.702 [2024-07-23 01:09:50.620506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.620646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.620797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.620824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 01:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.702 [2024-07-23 01:09:50.620988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 01:09:50 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:06.702 [2024-07-23 01:09:50.621127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.621152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 01:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.702 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.702 [2024-07-23 01:09:50.621280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.621409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.621434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.621587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.621734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.621759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 
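Two more pieces of the test flow surface in the mix above: the lone "Malloc0" line is the return value of the earlier bdev_malloc_create RPC, and host/target_disconnect.sh then runs rpc_cmd nvmf_create_transport -t tcp -o, creating the NVMe-oF TCP transport inside the target. A minimal standalone sketch follows, under the same rpc.py assumptions as before; the extra -o option is whatever the test's rpc_cmd wrapper passes through and is deliberately left out here.

  # Sketch only: create the TCP transport on a running SPDK target.
  ./scripts/rpc.py nvmf_create_transport -t tcp
  # On success the target logs the "*** TCP Transport Init ***" notice,
  # which is visible just below in this log.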
00:30:06.702 [2024-07-23 01:09:50.621894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.622200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.622493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.622823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.622973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.623117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.623244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.623269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.623431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.623586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.623611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.623754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.623889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.623915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 
00:30:06.702 [2024-07-23 01:09:50.624108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.624201] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.702 [2024-07-23 01:09:50.624241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.624265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.624398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.624539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.624563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.624695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.624857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.624881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.625016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.625151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.625176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.625343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.625500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.625524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.625672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.625802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.625826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.702 qpair failed and we were unable to recover it. 00:30:06.702 [2024-07-23 01:09:50.626025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.626159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.702 [2024-07-23 01:09:50.626183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 
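The tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** line above is the target-side confirmation that the TCP transport requested by the previous RPC was created. When debugging a run like this interactively, the registered transports and their parameters can be inspected with a follow-up RPC; a hedged example, again assuming scripts/rpc.py and the default RPC socket:

  # Sketch only: list the transports the target has registered, to confirm
  # the TCP transport and its queue/buffer settings.
  ./scripts/rpc.py nvmf_get_transports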
00:30:06.703 [2024-07-23 01:09:50.626342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.626525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.626549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.626705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.626836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.626860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.627021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.627157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.627184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.627349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.627482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.627506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.627649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.627806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.627834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.628017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.628176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.628200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.628331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.628508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.628532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 
00:30:06.703 [2024-07-23 01:09:50.628669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.628795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.628820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.628959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.629124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.629149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.629311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.629486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.629510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.629653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.629811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.629837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.630003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.630161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.630185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.630354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.630490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.630514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.630656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.630803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.630828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 
00:30:06.703 [2024-07-23 01:09:50.631026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.631157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.631181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.631336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.631481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.631505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.631653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.631815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.631839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.631996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.632130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.632157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.632345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 01:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.703 [2024-07-23 01:09:50.632490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.632515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 01:09:50 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.703 [2024-07-23 01:09:50.632651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 01:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.703 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.703 [2024-07-23 01:09:50.632839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.632863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.633051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.633184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.633208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 
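The xtrace above shows the next setup step: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, i.e. creating subsystem cnode1 with any-host access (-a) and serial number SPDK00000000000001. In a typical SPDK nvmf test the subsystem is then given the Malloc0 namespace and a TCP listener on the address the initiator keeps dialing (10.0.0.2:4420 throughout this log); those follow-up RPCs are not visible in this excerpt, so the sketch below is an assumption about the usual sequence rather than a transcript of it.

  # Sketch only: usual follow-up to nvmf_create_subsystem in SPDK nvmf tests.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Once the listener exists, connect attempts to 10.0.0.2:4420 should stop
  # failing with ECONNREFUSED (errno 111), assuming the address is reachable.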
00:30:06.703 [2024-07-23 01:09:50.633358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.633521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.633545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.633704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.633834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.633858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.634004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.634143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.703 [2024-07-23 01:09:50.634167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.703 qpair failed and we were unable to recover it. 00:30:06.703 [2024-07-23 01:09:50.634347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.634509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.634533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.634706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.634838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.634862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.635032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.635189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.635213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.635374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.635518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.635543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-23 01:09:50.635691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.635831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.635855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.636012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.636138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.636163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.636340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.636476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.636500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.636634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.636783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.636808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.636946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.637096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.637120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.637285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.637434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.637459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.637585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.637736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.637760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-23 01:09:50.637904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.638189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.638512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.638837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.638980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.639005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.639167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.639328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.639352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.639540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.639690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.639715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.639856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.640019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.640044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-23 01:09:50.640191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.640346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.640370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 01:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.704 [2024-07-23 01:09:50.640546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 01:09:50 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.704 [2024-07-23 01:09:50.640723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.640749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 01:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.704 [2024-07-23 01:09:50.640915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.704 [2024-07-23 01:09:50.641077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.641101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.641233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.641394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.641418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.641578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.641719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.641744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.641877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.642040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.642065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.642202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.642332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.642357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 
00:30:06.704 [2024-07-23 01:09:50.642548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.642684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.642709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.642902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.643040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.643067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.704 qpair failed and we were unable to recover it. 00:30:06.704 [2024-07-23 01:09:50.643230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.704 [2024-07-23 01:09:50.643386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.643410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.643550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.643718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.643743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.643882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.644044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.644068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.644230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.644366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.644391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.644555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.644691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.644716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 
00:30:06.705 [2024-07-23 01:09:50.644846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.645164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.645460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.645813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.645990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.646152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.646302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.646326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.646462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.646638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.646663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.646798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.646931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.646956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 
00:30:06.705 [2024-07-23 01:09:50.647116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.647257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.647281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.647407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.647553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.647578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.647744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.647908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.647932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.648064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.648209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.648233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.648368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.648503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.648527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 01:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.648675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 01:09:50 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.705 01:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.705 [2024-07-23 01:09:50.648832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.648857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.705 [2024-07-23 01:09:50.648997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.649133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.649157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 
00:30:06.705 [2024-07-23 01:09:50.649290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.649450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.649476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.649606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.649769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.649794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.649923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.650057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.650081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.650251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.650376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.650405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.650547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.650710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.650735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.650872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.651174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 
00:30:06.705 [2024-07-23 01:09:50.651494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.651798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.651952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.705 qpair failed and we were unable to recover it. 00:30:06.705 [2024-07-23 01:09:50.652084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.705 [2024-07-23 01:09:50.652244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.706 [2024-07-23 01:09:50.652268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4350 with addr=10.0.0.2, port=4420 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.652400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.706 [2024-07-23 01:09:50.652447] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.706 [2024-07-23 01:09:50.655274] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:30:06.706 [2024-07-23 01:09:50.655350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4350 (107): Transport endpoint is not connected 00:30:06.706 [2024-07-23 01:09:50.655415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 
00:30:06.706 01:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.706 01:09:50 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.706 01:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.706 01:09:50 -- common/autotest_common.sh@10 -- # set +x 00:30:06.706 01:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.706 01:09:50 -- host/target_disconnect.sh@58 -- # wait 3529823 00:30:06.706 [2024-07-23 01:09:50.664877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.665039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.665067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.665087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.665101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.665130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.674796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.674939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.674966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.674980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.674993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.675022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.684844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.684987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.685013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.685027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.685040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.685068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 
00:30:06.706 [2024-07-23 01:09:50.694814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.694956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.694982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.694996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.695009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.695037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.704820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.704961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.704988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.705002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.705014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.705042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.714883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.715050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.715076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.715089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.715102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.715130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 
00:30:06.706 [2024-07-23 01:09:50.724883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.725024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.725050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.725064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.725077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.725108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.734965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.735106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.735133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.735147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.735160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.735188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.744959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.745096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.745122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.745136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.745149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.745177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 
00:30:06.706 [2024-07-23 01:09:50.754996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.755137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.755167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.755182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.755195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.755223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.764966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.765104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.765130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.765143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.765157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.765185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 00:30:06.706 [2024-07-23 01:09:50.775052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.706 [2024-07-23 01:09:50.775212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.706 [2024-07-23 01:09:50.775238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.706 [2024-07-23 01:09:50.775252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.706 [2024-07-23 01:09:50.775264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.706 [2024-07-23 01:09:50.775292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.706 qpair failed and we were unable to recover it. 
00:30:06.707 [2024-07-23 01:09:50.785220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.785395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.785421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.785435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.785448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.785476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.795110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.795253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.795279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.795293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.795306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.795333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.805154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.805311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.805336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.805350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.805362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.805390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 
00:30:06.707 [2024-07-23 01:09:50.815180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.815316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.815342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.815355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.815368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.815396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.825152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.825291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.825316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.825330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.825343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.825373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.835177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.835318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.835344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.835358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.835371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.835399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 
00:30:06.707 [2024-07-23 01:09:50.845240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.845385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.845416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.845431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.845444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.845471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.855256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.855447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.855472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.855486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.855499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.855526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.865294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.865444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.865470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.865484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.865497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.865525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 
00:30:06.707 [2024-07-23 01:09:50.875309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.875445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.875471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.875485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.875497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.875525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.885341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.707 [2024-07-23 01:09:50.885481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.707 [2024-07-23 01:09:50.885506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.707 [2024-07-23 01:09:50.885520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.707 [2024-07-23 01:09:50.885533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.707 [2024-07-23 01:09:50.885565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.707 qpair failed and we were unable to recover it. 00:30:06.707 [2024-07-23 01:09:50.895432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-23 01:09:50.895577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-23 01:09:50.895609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-23 01:09:50.895634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-23 01:09:50.895649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.966 [2024-07-23 01:09:50.895677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.966 qpair failed and we were unable to recover it. 
00:30:06.966 [2024-07-23 01:09:50.905429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-23 01:09:50.905586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-23 01:09:50.905618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-23 01:09:50.905634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-23 01:09:50.905647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.966 [2024-07-23 01:09:50.905675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.966 qpair failed and we were unable to recover it. 00:30:06.966 [2024-07-23 01:09:50.915438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.966 [2024-07-23 01:09:50.915578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.966 [2024-07-23 01:09:50.915604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.966 [2024-07-23 01:09:50.915625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.966 [2024-07-23 01:09:50.915639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.966 [2024-07-23 01:09:50.915668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.966 qpair failed and we were unable to recover it. 00:30:06.966 [2024-07-23 01:09:50.925450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.925595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.925627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.925643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.925656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.925683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-23 01:09:50.935527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.935675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.935705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.935720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.935732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.935760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:50.945565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.945712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.945738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.945752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.945764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.945792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:50.955585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.955757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.955785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.955800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.955816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.955846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-23 01:09:50.965594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.965759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.965785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.965799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.965812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.965840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:50.975642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.975780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.975806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.975819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.975832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.975866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:50.985676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.985812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.985838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.985852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.985864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.985892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-23 01:09:50.995690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:50.995837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:50.995863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:50.995877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:50.995890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:50.995917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:51.005706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:51.005843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:51.005868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:51.005881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:51.005894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:51.005922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:51.015757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:51.015896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:51.015922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:51.015936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:51.015948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:51.015975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-23 01:09:51.025778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:51.025919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:51.025949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:51.025963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:51.025976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:51.026003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:51.035790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:51.035940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:51.035966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:51.035980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:51.035993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:51.036023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 00:30:06.967 [2024-07-23 01:09:51.045851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:51.045991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:51.046016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:51.046030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.967 [2024-07-23 01:09:51.046043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.967 [2024-07-23 01:09:51.046071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.967 qpair failed and we were unable to recover it. 
00:30:06.967 [2024-07-23 01:09:51.055845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.967 [2024-07-23 01:09:51.055986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.967 [2024-07-23 01:09:51.056011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.967 [2024-07-23 01:09:51.056024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.056037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.056066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.065876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.066025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.066050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.066064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.066076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.066109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.075943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.076133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.076159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.076173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.076186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.076214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:06.968 [2024-07-23 01:09:51.085939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.086080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.086105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.086118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.086130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.086158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.095962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.096107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.096132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.096146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.096158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.096185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.105984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.106118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.106143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.106157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.106170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.106198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:06.968 [2024-07-23 01:09:51.116034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.116194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.116224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.116239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.116252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.116280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.126101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.126256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.126280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.126294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.126307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.126341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.136082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.136220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.136245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.136260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.136272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.136299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:06.968 [2024-07-23 01:09:51.146105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.146237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.146261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.146275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.146288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.146316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.156279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.156438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.156463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.156477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.156490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.156526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 00:30:06.968 [2024-07-23 01:09:51.166203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.968 [2024-07-23 01:09:51.166347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.968 [2024-07-23 01:09:51.166373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.968 [2024-07-23 01:09:51.166387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.968 [2024-07-23 01:09:51.166398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:06.968 [2024-07-23 01:09:51.166426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.968 qpair failed and we were unable to recover it. 
00:30:07.227 [2024-07-23 01:09:51.176204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.227 [2024-07-23 01:09:51.176348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.227 [2024-07-23 01:09:51.176374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.227 [2024-07-23 01:09:51.176388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.227 [2024-07-23 01:09:51.176401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.227 [2024-07-23 01:09:51.176429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.227 qpair failed and we were unable to recover it. 00:30:07.227 [2024-07-23 01:09:51.186250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.227 [2024-07-23 01:09:51.186389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.227 [2024-07-23 01:09:51.186414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.227 [2024-07-23 01:09:51.186429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.227 [2024-07-23 01:09:51.186442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.227 [2024-07-23 01:09:51.186470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.227 qpair failed and we were unable to recover it. 00:30:07.227 [2024-07-23 01:09:51.196278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.227 [2024-07-23 01:09:51.196432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.227 [2024-07-23 01:09:51.196458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.227 [2024-07-23 01:09:51.196472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.196485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.196512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 
00:30:07.228 [2024-07-23 01:09:51.206335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.206530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.206561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.206576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.206589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.206624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.216362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.216512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.216538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.216551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.216565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.216592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.226363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.226508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.226533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.226548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.226560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.226588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 
00:30:07.228 [2024-07-23 01:09:51.236369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.236510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.236536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.236550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.236563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.236591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.246431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.246605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.246638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.246653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.246672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.246701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.256437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.256586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.256612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.256637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.256651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.256681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 
00:30:07.228 [2024-07-23 01:09:51.266477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.266634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.266663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.266679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.266693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.266723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.276499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.276685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.276712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.276727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.276741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.276769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.286540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.286718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.286744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.286759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.286771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.286800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 
00:30:07.228 [2024-07-23 01:09:51.296553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.296706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.296733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.296748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.296761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.296789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.306581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.306730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.306756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.306771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.306784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.306812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.316661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.316849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.316875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.316890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.316903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.316931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 
00:30:07.228 [2024-07-23 01:09:51.326669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.228 [2024-07-23 01:09:51.326816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.228 [2024-07-23 01:09:51.326842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.228 [2024-07-23 01:09:51.326856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.228 [2024-07-23 01:09:51.326870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.228 [2024-07-23 01:09:51.326897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.228 qpair failed and we were unable to recover it. 00:30:07.228 [2024-07-23 01:09:51.336687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.336833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.336861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.336876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.336894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.336923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.229 [2024-07-23 01:09:51.346704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.346881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.346907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.346922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.346950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.346979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 
00:30:07.229 [2024-07-23 01:09:51.356719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.356856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.356882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.356896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.356909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.356938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.229 [2024-07-23 01:09:51.366801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.366988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.367028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.367043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.367056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.367084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.229 [2024-07-23 01:09:51.376837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.376980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.377006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.377021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.377035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.377063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 
00:30:07.229 [2024-07-23 01:09:51.386903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.387049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.387076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.387091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.387104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.387133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.229 [2024-07-23 01:09:51.396869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.397012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.397039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.397054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.397068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.397096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.229 [2024-07-23 01:09:51.406914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.407062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.407088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.407103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.407116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.407159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 
00:30:07.229 [2024-07-23 01:09:51.417022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.417163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.417190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.417205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.417217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.417261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.229 [2024-07-23 01:09:51.426993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.229 [2024-07-23 01:09:51.427147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.229 [2024-07-23 01:09:51.427173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.229 [2024-07-23 01:09:51.427189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.229 [2024-07-23 01:09:51.427207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.229 [2024-07-23 01:09:51.427236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.229 qpair failed and we were unable to recover it. 00:30:07.488 [2024-07-23 01:09:51.436987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.437129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.437156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.437172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.488 [2024-07-23 01:09:51.437185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.488 [2024-07-23 01:09:51.437214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.488 qpair failed and we were unable to recover it. 
00:30:07.488 [2024-07-23 01:09:51.447075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.447220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.447247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.447262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.488 [2024-07-23 01:09:51.447291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.488 [2024-07-23 01:09:51.447322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.488 qpair failed and we were unable to recover it. 00:30:07.488 [2024-07-23 01:09:51.457044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.457182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.457208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.457223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.488 [2024-07-23 01:09:51.457236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.488 [2024-07-23 01:09:51.457264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.488 qpair failed and we were unable to recover it. 00:30:07.488 [2024-07-23 01:09:51.467094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.467239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.467267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.467282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.488 [2024-07-23 01:09:51.467295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.488 [2024-07-23 01:09:51.467339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.488 qpair failed and we were unable to recover it. 
00:30:07.488 [2024-07-23 01:09:51.477101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.477243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.477270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.477285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.488 [2024-07-23 01:09:51.477297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.488 [2024-07-23 01:09:51.477325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.488 qpair failed and we were unable to recover it. 00:30:07.488 [2024-07-23 01:09:51.487203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.487398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.487424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.487454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.488 [2024-07-23 01:09:51.487466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.488 [2024-07-23 01:09:51.487511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.488 qpair failed and we were unable to recover it. 00:30:07.488 [2024-07-23 01:09:51.497163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.488 [2024-07-23 01:09:51.497307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.488 [2024-07-23 01:09:51.497333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.488 [2024-07-23 01:09:51.497348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.497361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.497390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 
00:30:07.489 [2024-07-23 01:09:51.507218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.507372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.507398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.507412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.507425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.507453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.517256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.517404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.517431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.517445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.517465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.517494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.527248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.527414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.527440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.527455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.527468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.527496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 
00:30:07.489 [2024-07-23 01:09:51.537280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.537435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.537462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.537476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.537488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.537516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.547328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.547473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.547499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.547514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.547527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.547555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.557457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.557599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.557634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.557650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.557663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.557692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 
00:30:07.489 [2024-07-23 01:09:51.567385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.567576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.567624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.567641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.567654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.567699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.577416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.577599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.577633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.577648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.577661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.577689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.587449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.587641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.587667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.587681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.587694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.587723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 
00:30:07.489 [2024-07-23 01:09:51.597519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.597664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.597692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.597707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.597719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.597750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.607571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.607724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.607751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.607771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.607786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.607815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 00:30:07.489 [2024-07-23 01:09:51.617531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.617680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.617707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.617722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.617735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.489 [2024-07-23 01:09:51.617764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.489 qpair failed and we were unable to recover it. 
00:30:07.489 [2024-07-23 01:09:51.627565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.489 [2024-07-23 01:09:51.627720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.489 [2024-07-23 01:09:51.627746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.489 [2024-07-23 01:09:51.627771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.489 [2024-07-23 01:09:51.627783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.627811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 00:30:07.490 [2024-07-23 01:09:51.637610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.490 [2024-07-23 01:09:51.637786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.490 [2024-07-23 01:09:51.637812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.490 [2024-07-23 01:09:51.637827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.490 [2024-07-23 01:09:51.637839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.637868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 00:30:07.490 [2024-07-23 01:09:51.647648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.490 [2024-07-23 01:09:51.647804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.490 [2024-07-23 01:09:51.647831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.490 [2024-07-23 01:09:51.647846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.490 [2024-07-23 01:09:51.647860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.647887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 
00:30:07.490 [2024-07-23 01:09:51.657667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.490 [2024-07-23 01:09:51.657806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.490 [2024-07-23 01:09:51.657833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.490 [2024-07-23 01:09:51.657848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.490 [2024-07-23 01:09:51.657862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.657892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 00:30:07.490 [2024-07-23 01:09:51.667683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.490 [2024-07-23 01:09:51.667822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.490 [2024-07-23 01:09:51.667849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.490 [2024-07-23 01:09:51.667863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.490 [2024-07-23 01:09:51.667876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.667904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 00:30:07.490 [2024-07-23 01:09:51.677720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.490 [2024-07-23 01:09:51.677906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.490 [2024-07-23 01:09:51.677947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.490 [2024-07-23 01:09:51.677962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.490 [2024-07-23 01:09:51.677976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.678003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 
00:30:07.490 [2024-07-23 01:09:51.687764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.490 [2024-07-23 01:09:51.687954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.490 [2024-07-23 01:09:51.687981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.490 [2024-07-23 01:09:51.687996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.490 [2024-07-23 01:09:51.688009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.490 [2024-07-23 01:09:51.688037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.490 qpair failed and we were unable to recover it. 00:30:07.749 [2024-07-23 01:09:51.697793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.697981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.698023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.698043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.698057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.698100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 00:30:07.749 [2024-07-23 01:09:51.707851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.708011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.708038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.708053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.708066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.708109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 
00:30:07.749 [2024-07-23 01:09:51.717861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.718002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.718028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.718043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.718057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.718100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 00:30:07.749 [2024-07-23 01:09:51.727965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.728147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.728174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.728189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.728202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.728229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 00:30:07.749 [2024-07-23 01:09:51.737912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.738054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.738080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.738096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.738109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.738137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 
00:30:07.749 [2024-07-23 01:09:51.747925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.748087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.748114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.748133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.748162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.748191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 00:30:07.749 [2024-07-23 01:09:51.758058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.758201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.758229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.749 [2024-07-23 01:09:51.758244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.749 [2024-07-23 01:09:51.758257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.749 [2024-07-23 01:09:51.758300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.749 qpair failed and we were unable to recover it. 00:30:07.749 [2024-07-23 01:09:51.767976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.749 [2024-07-23 01:09:51.768118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.749 [2024-07-23 01:09:51.768145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.768160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.768173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.768202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 
00:30:07.750 [2024-07-23 01:09:51.778018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.778197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.778224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.778239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.778253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.778280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.788075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.788261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.788288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.788309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.788323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.788351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.798094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.798238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.798266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.798282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.798296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.798339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 
00:30:07.750 [2024-07-23 01:09:51.808107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.808293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.808320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.808335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.808348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.808376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.818183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.818322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.818348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.818363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.818376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.818420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.828150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.828287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.828313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.828328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.828342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.828369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 
00:30:07.750 [2024-07-23 01:09:51.838217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.838375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.838402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.838418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.838431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.838459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.848257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.848489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.848514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.848528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.848540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.848582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.858253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.858430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.858457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.858472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.858485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.858513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 
00:30:07.750 [2024-07-23 01:09:51.868268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.868441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.868468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.868482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.868496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.868524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.878283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.878422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.878448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.878468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.878483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.878513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 00:30:07.750 [2024-07-23 01:09:51.888342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.888484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.888509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.888524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.750 [2024-07-23 01:09:51.888536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.750 [2024-07-23 01:09:51.888580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.750 qpair failed and we were unable to recover it. 
00:30:07.750 [2024-07-23 01:09:51.898340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.750 [2024-07-23 01:09:51.898480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.750 [2024-07-23 01:09:51.898506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.750 [2024-07-23 01:09:51.898521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.751 [2024-07-23 01:09:51.898534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.751 [2024-07-23 01:09:51.898563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.751 qpair failed and we were unable to recover it. 00:30:07.751 [2024-07-23 01:09:51.908403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.751 [2024-07-23 01:09:51.908542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.751 [2024-07-23 01:09:51.908567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.751 [2024-07-23 01:09:51.908582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.751 [2024-07-23 01:09:51.908595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.751 [2024-07-23 01:09:51.908632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.751 qpair failed and we were unable to recover it. 00:30:07.751 [2024-07-23 01:09:51.918412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.751 [2024-07-23 01:09:51.918551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.751 [2024-07-23 01:09:51.918578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.751 [2024-07-23 01:09:51.918593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.751 [2024-07-23 01:09:51.918606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.751 [2024-07-23 01:09:51.918643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.751 qpair failed and we were unable to recover it. 
00:30:07.751 [2024-07-23 01:09:51.928470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.751 [2024-07-23 01:09:51.928656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.751 [2024-07-23 01:09:51.928683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.751 [2024-07-23 01:09:51.928698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.751 [2024-07-23 01:09:51.928711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.751 [2024-07-23 01:09:51.928740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.751 qpair failed and we were unable to recover it. 00:30:07.751 [2024-07-23 01:09:51.938470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.751 [2024-07-23 01:09:51.938623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.751 [2024-07-23 01:09:51.938661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.751 [2024-07-23 01:09:51.938675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.751 [2024-07-23 01:09:51.938689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.751 [2024-07-23 01:09:51.938728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.751 qpair failed and we were unable to recover it. 00:30:07.751 [2024-07-23 01:09:51.948519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.751 [2024-07-23 01:09:51.948712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.751 [2024-07-23 01:09:51.948739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.751 [2024-07-23 01:09:51.948759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.751 [2024-07-23 01:09:51.948774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:07.751 [2024-07-23 01:09:51.948806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.751 qpair failed and we were unable to recover it. 
00:30:08.010 [2024-07-23 01:09:51.958603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:51.958801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:51.958829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:51.958844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:51.958857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:51.958887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:51.968582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:51.968734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:51.968768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:51.968784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:51.968798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:51.968829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:51.978621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:51.978784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:51.978810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:51.978825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:51.978840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:51.978868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 
00:30:08.010 [2024-07-23 01:09:51.988611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:51.988768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:51.988794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:51.988809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:51.988823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:51.988851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:51.998652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:51.998816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:51.998843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:51.998858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:51.998871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:51.998899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:52.008770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:52.008943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:52.008971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:52.009001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:52.009014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:52.009042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 
00:30:08.010 [2024-07-23 01:09:52.018717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:52.018859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:52.018889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:52.018904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:52.018917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:52.018952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:52.028743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:52.028900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:52.028927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:52.028942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:52.028955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:52.028998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:52.038802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:52.038981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:52.039009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:52.039039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:52.039052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:52.039080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 
00:30:08.010 [2024-07-23 01:09:52.048850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:52.049007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:52.049033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:52.049048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:52.049076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:52.049106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:52.058972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.010 [2024-07-23 01:09:52.059112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.010 [2024-07-23 01:09:52.059144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.010 [2024-07-23 01:09:52.059159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.010 [2024-07-23 01:09:52.059172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.010 [2024-07-23 01:09:52.059214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.010 qpair failed and we were unable to recover it. 00:30:08.010 [2024-07-23 01:09:52.068875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.069018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.069044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.069060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.069074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.069102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 
00:30:08.011 [2024-07-23 01:09:52.078883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.079019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.079046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.079061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.079075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.079104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.088946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.089132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.089173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.089188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.089201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.089229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.098947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.099089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.099115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.099130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.099144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.099177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 
00:30:08.011 [2024-07-23 01:09:52.108968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.109117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.109144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.109160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.109173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.109201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.119023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.119171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.119197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.119212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.119225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.119253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.129182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.129335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.129363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.129381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.129410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.129440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 
00:30:08.011 [2024-07-23 01:09:52.139113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.139288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.139315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.139330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.139344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.139372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.149120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.149302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.149335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.149351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.149365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.149394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.159119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.159256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.159283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.159299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.159312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.159340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 
00:30:08.011 [2024-07-23 01:09:52.169158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.169306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.169332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.169348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.169361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.169389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.179188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.179371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.179398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.179413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.179426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.179453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 00:30:08.011 [2024-07-23 01:09:52.189199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.189342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.189368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.189383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.011 [2024-07-23 01:09:52.189396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.011 [2024-07-23 01:09:52.189430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.011 qpair failed and we were unable to recover it. 
00:30:08.011 [2024-07-23 01:09:52.199226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.011 [2024-07-23 01:09:52.199394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.011 [2024-07-23 01:09:52.199421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.011 [2024-07-23 01:09:52.199436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.012 [2024-07-23 01:09:52.199449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.012 [2024-07-23 01:09:52.199478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.012 qpair failed and we were unable to recover it. 00:30:08.012 [2024-07-23 01:09:52.209321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.012 [2024-07-23 01:09:52.209466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.012 [2024-07-23 01:09:52.209492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.012 [2024-07-23 01:09:52.209507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.012 [2024-07-23 01:09:52.209520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.012 [2024-07-23 01:09:52.209548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.012 qpair failed and we were unable to recover it. 00:30:08.270 [2024-07-23 01:09:52.219329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.270 [2024-07-23 01:09:52.219476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.270 [2024-07-23 01:09:52.219504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.270 [2024-07-23 01:09:52.219523] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.270 [2024-07-23 01:09:52.219536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.270 [2024-07-23 01:09:52.219566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.270 qpair failed and we were unable to recover it. 
00:30:08.270 [2024-07-23 01:09:52.229318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.270 [2024-07-23 01:09:52.229456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.229483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.229499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.229512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.229541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.239381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.239544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.239578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.239594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.239607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.239646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.249423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.249571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.249598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.249619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.249633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.249662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 
00:30:08.271 [2024-07-23 01:09:52.259421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.259606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.259639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.259655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.259668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.259696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.269431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.269579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.269606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.269631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.269645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.269674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.279552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.279702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.279730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.279745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.279758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.279792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 
00:30:08.271 [2024-07-23 01:09:52.289597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.289777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.289804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.289818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.289831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.289862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.299518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.299665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.299692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.299707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.299721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.299750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.309550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.309706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.309733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.309748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.309760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.309791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 
00:30:08.271 [2024-07-23 01:09:52.319579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.319739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.319765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.319780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.319792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.319822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.329639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.329783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.329814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.329830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.329842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.329872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.339660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.339802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.339828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.339842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.339855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.339884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 
00:30:08.271 [2024-07-23 01:09:52.349672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.349816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.349842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.349856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.271 [2024-07-23 01:09:52.349869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.271 [2024-07-23 01:09:52.349899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.271 qpair failed and we were unable to recover it. 00:30:08.271 [2024-07-23 01:09:52.359753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.271 [2024-07-23 01:09:52.359892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.271 [2024-07-23 01:09:52.359918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.271 [2024-07-23 01:09:52.359933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.359946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.359990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.369743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.369883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.369909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.369923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.369936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.369970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 
00:30:08.272 [2024-07-23 01:09:52.379768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.379904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.379930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.379945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.379957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.379987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.389784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.389928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.389954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.389969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.389981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.390010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.399816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.399951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.399977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.399992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.400006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.400035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 
00:30:08.272 [2024-07-23 01:09:52.409933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.410082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.410108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.410123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.410135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.410178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.419912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.420058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.420089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.420104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.420117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.420161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.429916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.430073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.430099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.430114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.430126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.430155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 
00:30:08.272 [2024-07-23 01:09:52.439964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.440108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.440134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.440148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.440161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.440190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.449986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.450131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.450156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.450171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.450183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.450213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.272 [2024-07-23 01:09:52.460003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.460152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.460179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.460194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.460214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.460243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 
00:30:08.272 [2024-07-23 01:09:52.470074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.272 [2024-07-23 01:09:52.470259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.272 [2024-07-23 01:09:52.470284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.272 [2024-07-23 01:09:52.470299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.272 [2024-07-23 01:09:52.470312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.272 [2024-07-23 01:09:52.470341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.272 qpair failed and we were unable to recover it. 00:30:08.531 [2024-07-23 01:09:52.480082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.480229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.480256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.480271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-07-23 01:09:52.480284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.531 [2024-07-23 01:09:52.480313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 00:30:08.531 [2024-07-23 01:09:52.490104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.490250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.490276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.490290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-07-23 01:09:52.490303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.531 [2024-07-23 01:09:52.490332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.531 [2024-07-23 01:09:52.500147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.500307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.500332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.500347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-07-23 01:09:52.500375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.531 [2024-07-23 01:09:52.500402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 00:30:08.531 [2024-07-23 01:09:52.510198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.510344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.510371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.510385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-07-23 01:09:52.510398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.531 [2024-07-23 01:09:52.510441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 00:30:08.531 [2024-07-23 01:09:52.520225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.520401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.520426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.520440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-07-23 01:09:52.520453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.531 [2024-07-23 01:09:52.520483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.531 [2024-07-23 01:09:52.530246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.530401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.530427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.530442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-07-23 01:09:52.530470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.531 [2024-07-23 01:09:52.530498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 00:30:08.531 [2024-07-23 01:09:52.540273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-07-23 01:09:52.540414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-07-23 01:09:52.540439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-07-23 01:09:52.540453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.540466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.540508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.550287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.550424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.550449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.550464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.550484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.550527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 
00:30:08.532 [2024-07-23 01:09:52.560317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.560464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.560490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.560504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.560517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.560546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.570342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.570497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.570523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.570537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.570550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.570580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.580369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.580519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.580547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.580565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.580580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.580632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 
00:30:08.532 [2024-07-23 01:09:52.590409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.590579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.590605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.590628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.590643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.590673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.600438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.600578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.600604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.600628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.600643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.600672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.610460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.610648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.610674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.610689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.610702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.610730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 
00:30:08.532 [2024-07-23 01:09:52.620488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.620650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.620677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.620691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.620704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.620733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.630506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.630649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.630674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.630688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.630700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.630728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.640585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.640782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.640808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.640824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.640843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.640872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 
00:30:08.532 [2024-07-23 01:09:52.650597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.650793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.650819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.650834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.650847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.650876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.660660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.660799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.660825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.660840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.660853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.660882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 00:30:08.532 [2024-07-23 01:09:52.670645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.532 [2024-07-23 01:09:52.670784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.532 [2024-07-23 01:09:52.670811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.532 [2024-07-23 01:09:52.670825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.532 [2024-07-23 01:09:52.670840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.532 [2024-07-23 01:09:52.670871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.532 qpair failed and we were unable to recover it. 
00:30:08.533 [2024-07-23 01:09:52.680712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.533 [2024-07-23 01:09:52.680871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.533 [2024-07-23 01:09:52.680898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.533 [2024-07-23 01:09:52.680916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.533 [2024-07-23 01:09:52.680930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.533 [2024-07-23 01:09:52.680976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.533 qpair failed and we were unable to recover it. 00:30:08.533 [2024-07-23 01:09:52.690722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.533 [2024-07-23 01:09:52.690872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.533 [2024-07-23 01:09:52.690899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.533 [2024-07-23 01:09:52.690914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.533 [2024-07-23 01:09:52.690928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.533 [2024-07-23 01:09:52.690958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.533 qpair failed and we were unable to recover it. 00:30:08.533 [2024-07-23 01:09:52.700746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.533 [2024-07-23 01:09:52.700887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.533 [2024-07-23 01:09:52.700913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.533 [2024-07-23 01:09:52.700928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.533 [2024-07-23 01:09:52.700942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.533 [2024-07-23 01:09:52.700985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.533 qpair failed and we were unable to recover it. 
00:30:08.533 [2024-07-23 01:09:52.710748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.533 [2024-07-23 01:09:52.710889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.533 [2024-07-23 01:09:52.710915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.533 [2024-07-23 01:09:52.710930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.533 [2024-07-23 01:09:52.710943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.533 [2024-07-23 01:09:52.710972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.533 qpair failed and we were unable to recover it. 00:30:08.533 [2024-07-23 01:09:52.720773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.533 [2024-07-23 01:09:52.720907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.533 [2024-07-23 01:09:52.720932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.533 [2024-07-23 01:09:52.720947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.533 [2024-07-23 01:09:52.720960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.533 [2024-07-23 01:09:52.720987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.533 qpair failed and we were unable to recover it. 00:30:08.533 [2024-07-23 01:09:52.730880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.533 [2024-07-23 01:09:52.731075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.533 [2024-07-23 01:09:52.731100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.533 [2024-07-23 01:09:52.731115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.533 [2024-07-23 01:09:52.731133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.533 [2024-07-23 01:09:52.731164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.533 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-23 01:09:52.740946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.741090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.741115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.741130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.741143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.741170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.750869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.751011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.751036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.751050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.751063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.751091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.760903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.761044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.761070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.761084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.761097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.761124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-23 01:09:52.770949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.771105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.771130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.771143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.771157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.771184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.780959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.781103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.781129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.781143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.781156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.781184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.791123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.791267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.791293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.791307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.791319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.791346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-23 01:09:52.801054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.801223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.801249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.801263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.801276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.801304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.811111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.811261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.811287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.811301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.811313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.811341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.821139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.821313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.821338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.821357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.821371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.821399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-23 01:09:52.831107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.831245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.831270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.831284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.831297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.831324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.841155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.841303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.841328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.841342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.841355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.841382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-23 01:09:52.851166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-07-23 01:09:52.851321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-07-23 01:09:52.851345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-07-23 01:09:52.851359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-07-23 01:09:52.851372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.792 [2024-07-23 01:09:52.851399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-23 01:09:52.861222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.861362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.861388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.861402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.861414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.861441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.871261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.871397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.871422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.871437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.871449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.871476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.881256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.881397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.881422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.881436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.881448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.881476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-23 01:09:52.891297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.891437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.891463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.891476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.891489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.891516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.901332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.901474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.901499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.901513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.901526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.901553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.911403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.911602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.911638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.911659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.911673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.911701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-23 01:09:52.921391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.921534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.921560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.921574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.921587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.921621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.931419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.931583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.931609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.931630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.931645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.931673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.941445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.941581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.941606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.941627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.941641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.941669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-23 01:09:52.951464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.951599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.951631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.951645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.951658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.951686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.961504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.961651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.961678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.961692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.961705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.961735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-23 01:09:52.971554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.971737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.971762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.971776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.971789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.793 [2024-07-23 01:09:52.971816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-23 01:09:52.981549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-07-23 01:09:52.981701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-07-23 01:09:52.981727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-07-23 01:09:52.981741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-07-23 01:09:52.981753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.794 [2024-07-23 01:09:52.981781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-23 01:09:52.991605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.794 [2024-07-23 01:09:52.991752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.794 [2024-07-23 01:09:52.991777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.794 [2024-07-23 01:09:52.991791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.794 [2024-07-23 01:09:52.991804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:08.794 [2024-07-23 01:09:52.991831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.794 qpair failed and we were unable to recover it. 00:30:09.052 [2024-07-23 01:09:53.001641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.052 [2024-07-23 01:09:53.001783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.052 [2024-07-23 01:09:53.001809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.052 [2024-07-23 01:09:53.001829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.052 [2024-07-23 01:09:53.001842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.001871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 
00:30:09.053 [2024-07-23 01:09:53.011657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.011802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.011827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.011841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.011854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.011881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.021680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.021823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.021848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.021862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.021875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.021902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.031766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.031906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.031931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.031945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.031957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.031985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 
00:30:09.053 [2024-07-23 01:09:53.041744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.041880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.041905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.041930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.041943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.041971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.051770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.051936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.051962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.051976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.051989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.052016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.061817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.061961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.061986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.062000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.062013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.062040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 
00:30:09.053 [2024-07-23 01:09:53.071819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.071960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.071985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.071999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.072011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.072038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.081894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.082058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.082084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.082098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.082110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.082138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.091944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.092085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.092111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.092138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.092153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.092183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 
00:30:09.053 [2024-07-23 01:09:53.101944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.102102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.102127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.102142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.102155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.102182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.112071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.112232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.112258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.112272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.112284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.112312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.121979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.122119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.122144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.122158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.122171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.122199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 
00:30:09.053 [2024-07-23 01:09:53.132012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.053 [2024-07-23 01:09:53.132152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.053 [2024-07-23 01:09:53.132178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.053 [2024-07-23 01:09:53.132192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.053 [2024-07-23 01:09:53.132204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.053 [2024-07-23 01:09:53.132231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.053 qpair failed and we were unable to recover it. 00:30:09.053 [2024-07-23 01:09:53.142052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.142190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.142215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.142229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.142242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.142269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.152102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.152244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.152270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.152283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.152296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.152325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 
00:30:09.054 [2024-07-23 01:09:53.162115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.162258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.162283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.162297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.162310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.162338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.172165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.172311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.172337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.172350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.172363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.172390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.182145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.182282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.182312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.182327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.182340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.182367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 
00:30:09.054 [2024-07-23 01:09:53.192200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.192339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.192365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.192379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.192391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.192421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.202184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.202327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.202353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.202367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.202380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.202408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.212224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.212364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.212388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.212402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.212414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.212442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 
00:30:09.054 [2024-07-23 01:09:53.222248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.222382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.222408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.222422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.222434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.222462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.232295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.232461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.232487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.232501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.232514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.232543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.054 [2024-07-23 01:09:53.242324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.242461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.242486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.242500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.242513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.242540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 
00:30:09.054 [2024-07-23 01:09:53.252453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.054 [2024-07-23 01:09:53.252598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.054 [2024-07-23 01:09:53.252632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.054 [2024-07-23 01:09:53.252647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.054 [2024-07-23 01:09:53.252660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.054 [2024-07-23 01:09:53.252687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.054 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.262383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.262526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.262551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.262564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.262577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.262604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.272398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.272544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.272574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.272589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.272602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.272636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-07-23 01:09:53.282436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.282576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.282602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.282622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.282636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.282664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.292496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.292652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.292678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.292691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.292704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.292731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.302502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.302648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.302674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.302688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.302701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.302728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-07-23 01:09:53.312520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.312671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.312696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.312710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.312723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.312756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.322562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.322729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.322755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.322769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.322782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.322810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.332587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.332737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.332761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.332775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.332788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.332815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-07-23 01:09:53.342597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.342751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.342777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.342791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.342803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.342830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.352725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.352864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.352889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.352903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.352915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.352943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-07-23 01:09:53.362719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.362859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.362888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.362903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.362916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.362943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-07-23 01:09:53.372740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.314 [2024-07-23 01:09:53.372884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.314 [2024-07-23 01:09:53.372909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.314 [2024-07-23 01:09:53.372922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.314 [2024-07-23 01:09:53.372935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.314 [2024-07-23 01:09:53.372963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.382751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.382897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.382922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.382935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.382947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.382974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.392782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.392933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.392958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.392972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.392984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.393011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-07-23 01:09:53.402795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.402937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.402962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.402976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.402989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.403022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.412821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.412970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.412995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.413009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.413021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.413049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.422864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.423065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.423091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.423105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.423118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.423147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-07-23 01:09:53.432867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.433007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.433032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.433046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.433058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.433086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.442897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.443052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.443077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.443091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.443104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.443133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.452920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.453062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.453094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.453109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.453122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.453149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-07-23 01:09:53.462973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.463159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.463183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.463197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.463210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.463239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.472978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.473116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.473140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.473154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.473166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.473194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.482994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.483132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.483157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.483171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.483183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.483211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-07-23 01:09:53.493097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.493245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.493274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.493289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.493302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.493336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.503052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.503193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.503219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.503233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.503246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.503273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-07-23 01:09:53.513142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.315 [2024-07-23 01:09:53.513296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.315 [2024-07-23 01:09:53.513322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.315 [2024-07-23 01:09:53.513336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.315 [2024-07-23 01:09:53.513348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.315 [2024-07-23 01:09:53.513376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.573 [2024-07-23 01:09:53.523134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.573 [2024-07-23 01:09:53.523282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.573 [2024-07-23 01:09:53.523308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.573 [2024-07-23 01:09:53.523322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.573 [2024-07-23 01:09:53.523334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.573 [2024-07-23 01:09:53.523362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.573 qpair failed and we were unable to recover it. 00:30:09.573 [2024-07-23 01:09:53.533160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.573 [2024-07-23 01:09:53.533301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.573 [2024-07-23 01:09:53.533326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.573 [2024-07-23 01:09:53.533340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.573 [2024-07-23 01:09:53.533353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.573 [2024-07-23 01:09:53.533380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.573 qpair failed and we were unable to recover it. 00:30:09.573 [2024-07-23 01:09:53.543201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.573 [2024-07-23 01:09:53.543385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.573 [2024-07-23 01:09:53.543415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.573 [2024-07-23 01:09:53.543430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.573 [2024-07-23 01:09:53.543441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.573 [2024-07-23 01:09:53.543469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.573 qpair failed and we were unable to recover it. 
00:30:09.573 [2024-07-23 01:09:53.553296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.573 [2024-07-23 01:09:53.553430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.573 [2024-07-23 01:09:53.553456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.573 [2024-07-23 01:09:53.553470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.573 [2024-07-23 01:09:53.553483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.573 [2024-07-23 01:09:53.553509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.573 qpair failed and we were unable to recover it. 00:30:09.573 [2024-07-23 01:09:53.563248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.573 [2024-07-23 01:09:53.563386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.573 [2024-07-23 01:09:53.563411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.573 [2024-07-23 01:09:53.563425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.573 [2024-07-23 01:09:53.563438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.573 [2024-07-23 01:09:53.563466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.573 qpair failed and we were unable to recover it. 00:30:09.573 [2024-07-23 01:09:53.573281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.573 [2024-07-23 01:09:53.573420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.573 [2024-07-23 01:09:53.573445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.573 [2024-07-23 01:09:53.573459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.573472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.573499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 
00:30:09.574 [2024-07-23 01:09:53.583350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.583531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.583556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.583570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.583583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.583626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.593309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.593456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.593482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.593496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.593508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.593536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.603351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.603501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.603526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.603540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.603553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.603581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 
00:30:09.574 [2024-07-23 01:09:53.613428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.613581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.613606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.613633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.613647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.613676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.623429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.623573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.623599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.623618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.623633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.623661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.633457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.633605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.633640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.633655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.633667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.633694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 
00:30:09.574 [2024-07-23 01:09:53.643477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.643632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.643658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.643672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.643686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.643715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.653539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.653692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.653717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.653730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.653744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.653771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.663522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.663667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.663692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.663706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.663718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.663746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 
00:30:09.574 [2024-07-23 01:09:53.673582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.673734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.673759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.673773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.673791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.673820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.683594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.683760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.683785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.683799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.683812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.683839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 00:30:09.574 [2024-07-23 01:09:53.693650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.693793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.693818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.693832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.693845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.574 [2024-07-23 01:09:53.693873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.574 qpair failed and we were unable to recover it. 
00:30:09.574 [2024-07-23 01:09:53.703655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.574 [2024-07-23 01:09:53.703792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.574 [2024-07-23 01:09:53.703817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.574 [2024-07-23 01:09:53.703831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.574 [2024-07-23 01:09:53.703843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.703871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 00:30:09.575 [2024-07-23 01:09:53.713677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.713832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.713857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.713871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.713883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.713912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 00:30:09.575 [2024-07-23 01:09:53.723696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.723847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.723873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.723887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.723899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.723927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 
00:30:09.575 [2024-07-23 01:09:53.733738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.733886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.733911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.733924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.733937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.733964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 00:30:09.575 [2024-07-23 01:09:53.743756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.743917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.743942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.743956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.743968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.743996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 00:30:09.575 [2024-07-23 01:09:53.753815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.753967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.753993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.754008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.754021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.754048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 
00:30:09.575 [2024-07-23 01:09:53.763832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.763968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.763993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.764007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.764025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.764054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 00:30:09.575 [2024-07-23 01:09:53.773923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.575 [2024-07-23 01:09:53.774072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.575 [2024-07-23 01:09:53.774097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.575 [2024-07-23 01:09:53.774111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.575 [2024-07-23 01:09:53.774124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.575 [2024-07-23 01:09:53.774155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.575 qpair failed and we were unable to recover it. 00:30:09.833 [2024-07-23 01:09:53.783897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.833 [2024-07-23 01:09:53.784041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.833 [2024-07-23 01:09:53.784067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.784080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.784093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.784120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-07-23 01:09:53.793909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.794047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.794073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.794087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.794099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.794126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.803951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.804124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.804151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.804166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.804179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.804207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.813974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.814124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.814150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.814164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.814176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.814203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-07-23 01:09:53.824038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.824180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.824206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.824220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.824232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.824260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.834048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.834191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.834217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.834231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.834245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.834272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.844059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.844200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.844225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.844239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.844251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.844278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-07-23 01:09:53.854156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.854365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.854393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.854407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.854425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.854454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.864104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.864237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.864262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.864275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.864288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.864315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.874143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.874279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.874305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.874318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.874330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.874358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-07-23 01:09:53.884157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.884298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.884323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.884337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.884350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.884378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.894266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.894464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.894489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.894503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.894516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.894545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-07-23 01:09:53.904266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.904442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.904468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.904482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.834 [2024-07-23 01:09:53.904494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.834 [2024-07-23 01:09:53.904521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-07-23 01:09:53.914345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.834 [2024-07-23 01:09:53.914506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.834 [2024-07-23 01:09:53.914532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.834 [2024-07-23 01:09:53.914546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.914559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.914586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:53.924314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.924488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.924514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.924528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.924541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.924568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:53.934327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.934514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.934539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.934553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.934566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.934593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.835 [2024-07-23 01:09:53.944347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.944489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.944513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.944527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.944545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.944573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:53.954371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.954512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.954538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.954551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.954564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.954591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:53.964414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.964595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.964628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.964643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.964656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.964684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.835 [2024-07-23 01:09:53.974457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.974599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.974630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.974645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.974658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.974685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:53.984554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.984712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.984737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.984752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.984764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.984792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:53.994502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:53.994680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:53.994706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:53.994720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:53.994732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:53.994760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.835 [2024-07-23 01:09:54.004544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:54.004689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:54.004714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:54.004728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:54.004741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:54.004769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:54.014548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:54.014688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:54.014722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:54.014737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:54.014749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:54.014777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-07-23 01:09:54.024588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:54.024736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:54.024761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:54.024775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:54.024788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:54.024816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.835 [2024-07-23 01:09:54.034662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.835 [2024-07-23 01:09:54.034803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.835 [2024-07-23 01:09:54.034828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.835 [2024-07-23 01:09:54.034848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.835 [2024-07-23 01:09:54.034861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:09.835 [2024-07-23 01:09:54.034889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.835 qpair failed and we were unable to recover it. 00:30:10.094 [2024-07-23 01:09:54.044658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.094 [2024-07-23 01:09:54.044817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.094 [2024-07-23 01:09:54.044842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.094 [2024-07-23 01:09:54.044855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.094 [2024-07-23 01:09:54.044868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.044896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.054715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.054865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.054890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.054904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.054917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.054945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 
00:30:10.095 [2024-07-23 01:09:54.064739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.064883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.064908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.064922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.064934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.064962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.074746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.074911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.074937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.074951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.074963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.074990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.084760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.084897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.084923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.084937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.084950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.084977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 
00:30:10.095 [2024-07-23 01:09:54.094849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.094988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.095013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.095027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.095040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.095067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.104808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.104961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.104986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.104999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.105012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.105039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.114921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.115074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.115099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.115113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.115126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.115153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 
00:30:10.095 [2024-07-23 01:09:54.124857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.125001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.125027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.125047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.125060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.125088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.134935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.135086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.135111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.135125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.135138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.135165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.144937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.145075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.145099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.145113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.145125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.145152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 
00:30:10.095 [2024-07-23 01:09:54.155011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.155219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.155246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.155260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.155276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.155306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.164990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.165132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.165158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.165173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.165186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.165213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 00:30:10.095 [2024-07-23 01:09:54.175026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.175170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.095 [2024-07-23 01:09:54.175195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.095 [2024-07-23 01:09:54.175209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.095 [2024-07-23 01:09:54.175221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.095 [2024-07-23 01:09:54.175249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.095 qpair failed and we were unable to recover it. 
00:30:10.095 [2024-07-23 01:09:54.185082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.095 [2024-07-23 01:09:54.185225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.185250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.185264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.185277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.185304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.195083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.195229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.195255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.195269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.195282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.195309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.205145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.205285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.205311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.205325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.205338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.205366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 
00:30:10.096 [2024-07-23 01:09:54.215194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.215338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.215363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.215383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.215396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.215424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.225153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.225292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.225317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.225331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.225344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.225371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.235202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.235342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.235368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.235381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.235394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.235421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 
00:30:10.096 [2024-07-23 01:09:54.245249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.245414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.245439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.245453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.245466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.245493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.255293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.255478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.255503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.255517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.255529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.255556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.265271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.265407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.265433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.265447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.265460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.265488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 
00:30:10.096 [2024-07-23 01:09:54.275356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.275501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.275527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.275541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.275553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.275581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.285351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.285486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.285511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.285526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.285539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.285566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 00:30:10.096 [2024-07-23 01:09:54.295398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.096 [2024-07-23 01:09:54.295543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.096 [2024-07-23 01:09:54.295569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.096 [2024-07-23 01:09:54.295583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.096 [2024-07-23 01:09:54.295596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.096 [2024-07-23 01:09:54.295630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.096 qpair failed and we were unable to recover it. 
00:30:10.355 [2024-07-23 01:09:54.305491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.355 [2024-07-23 01:09:54.305644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.355 [2024-07-23 01:09:54.305669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.355 [2024-07-23 01:09:54.305689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.355 [2024-07-23 01:09:54.305702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.355 [2024-07-23 01:09:54.305730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.355 qpair failed and we were unable to recover it. 00:30:10.355 [2024-07-23 01:09:54.315437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.355 [2024-07-23 01:09:54.315570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.355 [2024-07-23 01:09:54.315595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.355 [2024-07-23 01:09:54.315609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.355 [2024-07-23 01:09:54.315629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.355 [2024-07-23 01:09:54.315657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.355 qpair failed and we were unable to recover it. 00:30:10.355 [2024-07-23 01:09:54.325477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.355 [2024-07-23 01:09:54.325627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.355 [2024-07-23 01:09:54.325653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.355 [2024-07-23 01:09:54.325667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.355 [2024-07-23 01:09:54.325680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.355 [2024-07-23 01:09:54.325708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.355 qpair failed and we were unable to recover it. 
00:30:10.355 [2024-07-23 01:09:54.335520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.355 [2024-07-23 01:09:54.335668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.355 [2024-07-23 01:09:54.335694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.355 [2024-07-23 01:09:54.335708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.355 [2024-07-23 01:09:54.335721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.355 [2024-07-23 01:09:54.335749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.355 qpair failed and we were unable to recover it. 00:30:10.355 [2024-07-23 01:09:54.345554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.355 [2024-07-23 01:09:54.345736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.355 [2024-07-23 01:09:54.345762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.355 [2024-07-23 01:09:54.345776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.355 [2024-07-23 01:09:54.345788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.355 [2024-07-23 01:09:54.345816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.355 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.355550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.355697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.355723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.355737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.355749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.355777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 
00:30:10.356 [2024-07-23 01:09:54.365596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.365759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.365785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.365799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.365811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.365841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.375631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.375812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.375837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.375851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.375864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.375894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.385669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.385809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.385833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.385847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.385859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.385886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 
00:30:10.356 [2024-07-23 01:09:54.395668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.395806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.395835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.395850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.395862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.395890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.405709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.405850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.405875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.405889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.405901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.405929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.415824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.415971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.415997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.416011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.416023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.416050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 
00:30:10.356 [2024-07-23 01:09:54.425750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.425886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.425911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.425924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.425936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.425965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.435775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.435915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.435940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.435954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.435966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.435993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.445896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.446039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.446065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.446079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.446092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.446119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 
00:30:10.356 [2024-07-23 01:09:54.455840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.455982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.456007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.456021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.456034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.456061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.465873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.466018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.466043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.466057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.466070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.466097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 00:30:10.356 [2024-07-23 01:09:54.475920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.476056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.356 [2024-07-23 01:09:54.476081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.356 [2024-07-23 01:09:54.476095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.356 [2024-07-23 01:09:54.476107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.356 [2024-07-23 01:09:54.476134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.356 qpair failed and we were unable to recover it. 
00:30:10.356 [2024-07-23 01:09:54.485917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.356 [2024-07-23 01:09:54.486053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.486083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.486098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.486111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.486138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 00:30:10.357 [2024-07-23 01:09:54.495978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.496126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.496151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.496165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.496178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.496205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 00:30:10.357 [2024-07-23 01:09:54.506020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.506187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.506213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.506226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.506239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.506268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 
00:30:10.357 [2024-07-23 01:09:54.516078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.516211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.516236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.516249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.516262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.516289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 00:30:10.357 [2024-07-23 01:09:54.526133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.526265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.526290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.526304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.526317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.526351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 00:30:10.357 [2024-07-23 01:09:54.536120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.536270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.536295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.536309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.536321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.536348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 
00:30:10.357 [2024-07-23 01:09:54.546102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.546246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.546271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.546285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.546297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.546324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 00:30:10.357 [2024-07-23 01:09:54.556155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.357 [2024-07-23 01:09:54.556300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.357 [2024-07-23 01:09:54.556326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.357 [2024-07-23 01:09:54.556340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.357 [2024-07-23 01:09:54.556352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.357 [2024-07-23 01:09:54.556379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.357 qpair failed and we were unable to recover it. 00:30:10.616 [2024-07-23 01:09:54.566193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.566343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.616 [2024-07-23 01:09:54.566368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.616 [2024-07-23 01:09:54.566382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.616 [2024-07-23 01:09:54.566395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.616 [2024-07-23 01:09:54.566422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.616 qpair failed and we were unable to recover it. 
00:30:10.616 [2024-07-23 01:09:54.576267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.576418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.616 [2024-07-23 01:09:54.576450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.616 [2024-07-23 01:09:54.576469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.616 [2024-07-23 01:09:54.576482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.616 [2024-07-23 01:09:54.576510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.616 qpair failed and we were unable to recover it. 00:30:10.616 [2024-07-23 01:09:54.586303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.586444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.616 [2024-07-23 01:09:54.586471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.616 [2024-07-23 01:09:54.586485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.616 [2024-07-23 01:09:54.586497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.616 [2024-07-23 01:09:54.586525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.616 qpair failed and we were unable to recover it. 00:30:10.616 [2024-07-23 01:09:54.596252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.596391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.616 [2024-07-23 01:09:54.596416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.616 [2024-07-23 01:09:54.596430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.616 [2024-07-23 01:09:54.596443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.616 [2024-07-23 01:09:54.596470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.616 qpair failed and we were unable to recover it. 
00:30:10.616 [2024-07-23 01:09:54.606323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.606500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.616 [2024-07-23 01:09:54.606527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.616 [2024-07-23 01:09:54.606542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.616 [2024-07-23 01:09:54.606554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.616 [2024-07-23 01:09:54.606584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.616 qpair failed and we were unable to recover it. 00:30:10.616 [2024-07-23 01:09:54.616325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.616503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.616 [2024-07-23 01:09:54.616529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.616 [2024-07-23 01:09:54.616542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.616 [2024-07-23 01:09:54.616555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.616 [2024-07-23 01:09:54.616588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.616 qpair failed and we were unable to recover it. 00:30:10.616 [2024-07-23 01:09:54.626337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.616 [2024-07-23 01:09:54.626546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.626572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.626587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.626603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.626642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 
00:30:10.617 [2024-07-23 01:09:54.636383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.636518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.636542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.636556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.636567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.636595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.646414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.646560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.646586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.646599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.646619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.646650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.656435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.656573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.656599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.656621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.656636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.656664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 
00:30:10.617 [2024-07-23 01:09:54.666447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.666583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.666619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.666636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.666650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.666677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.676486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.676628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.676654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.676668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.676681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.676708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.686499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.686652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.686678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.686691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.686704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.686732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 
00:30:10.617 [2024-07-23 01:09:54.696544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.696690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.696715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.696729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.696741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.696769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.706622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.706800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.706825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.706839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.706851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.706885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.716620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.716756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.716781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.716795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.716808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.716835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 
00:30:10.617 [2024-07-23 01:09:54.726648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.726820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.726845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.726858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.726871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.726900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.736801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.736944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.736969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.736983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.736995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.737022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 00:30:10.617 [2024-07-23 01:09:54.746722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.746864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.746889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.746902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.617 [2024-07-23 01:09:54.746915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.617 [2024-07-23 01:09:54.746942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.617 qpair failed and we were unable to recover it. 
00:30:10.617 [2024-07-23 01:09:54.756742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.617 [2024-07-23 01:09:54.756885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.617 [2024-07-23 01:09:54.756915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.617 [2024-07-23 01:09:54.756929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.756942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.756969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 00:30:10.618 [2024-07-23 01:09:54.766804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.618 [2024-07-23 01:09:54.766941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.618 [2024-07-23 01:09:54.766966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.618 [2024-07-23 01:09:54.766980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.766993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.767021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 00:30:10.618 [2024-07-23 01:09:54.776796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.618 [2024-07-23 01:09:54.776939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.618 [2024-07-23 01:09:54.776964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.618 [2024-07-23 01:09:54.776977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.776989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.777016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 
00:30:10.618 [2024-07-23 01:09:54.786814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.618 [2024-07-23 01:09:54.786951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.618 [2024-07-23 01:09:54.786976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.618 [2024-07-23 01:09:54.786989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.787002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.787029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 00:30:10.618 [2024-07-23 01:09:54.796843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.618 [2024-07-23 01:09:54.796983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.618 [2024-07-23 01:09:54.797009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.618 [2024-07-23 01:09:54.797023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.797035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.797067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 00:30:10.618 [2024-07-23 01:09:54.806901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.618 [2024-07-23 01:09:54.807068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.618 [2024-07-23 01:09:54.807092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.618 [2024-07-23 01:09:54.807106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.807119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.807146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 
00:30:10.618 [2024-07-23 01:09:54.816933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.618 [2024-07-23 01:09:54.817088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.618 [2024-07-23 01:09:54.817112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.618 [2024-07-23 01:09:54.817126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.618 [2024-07-23 01:09:54.817139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.618 [2024-07-23 01:09:54.817166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.618 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.826970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.827109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.827133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.827147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.827160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.827187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.836949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.837096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.837121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.837135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.837147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.837174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 
00:30:10.877 [2024-07-23 01:09:54.847021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.847164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.847194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.847209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.847221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.847249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.857026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.857171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.857196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.857210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.857222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.857251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.867076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.867228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.867253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.867267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.867279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.867307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 
00:30:10.877 [2024-07-23 01:09:54.877110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.877246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.877271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.877285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.877298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.877324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.887161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.887323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.887348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.887361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.887379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.887408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.897133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.897279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.897304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.897318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.897330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.897357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 
00:30:10.877 [2024-07-23 01:09:54.907201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.907350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.907375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.907389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.907402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.877 [2024-07-23 01:09:54.907429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.877 qpair failed and we were unable to recover it. 00:30:10.877 [2024-07-23 01:09:54.917252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.877 [2024-07-23 01:09:54.917430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.877 [2024-07-23 01:09:54.917458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.877 [2024-07-23 01:09:54.917472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.877 [2024-07-23 01:09:54.917488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.917516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:54.927223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.927361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.927386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.927400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.927413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.927441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 
00:30:10.878 [2024-07-23 01:09:54.937256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.937399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.937424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.937439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.937452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.937479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:54.947297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.947445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.947470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.947483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.947496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.947523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:54.957329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.957473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.957499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.957513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.957525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.957553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 
00:30:10.878 [2024-07-23 01:09:54.967354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.967496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.967523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.967537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.967550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.967577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:54.977381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.977526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.977552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.977565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.977585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.977624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:54.987428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.987603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.987636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.987651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.987663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.987691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 
00:30:10.878 [2024-07-23 01:09:54.997438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:54.997594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:54.997634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:54.997648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:54.997661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:54.997689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:55.007458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:55.007590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:55.007621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:55.007637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:55.007649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:55.007677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:55.017512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:55.017665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:55.017691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:55.017705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:55.017717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:55.017745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 
00:30:10.878 [2024-07-23 01:09:55.027522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:55.027673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:55.027698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:55.027712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:55.027725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:55.027752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:55.037594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:55.037782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:55.037808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:55.037827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:55.037840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:55.037868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 00:30:10.878 [2024-07-23 01:09:55.047584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.878 [2024-07-23 01:09:55.047742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.878 [2024-07-23 01:09:55.047767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.878 [2024-07-23 01:09:55.047781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.878 [2024-07-23 01:09:55.047794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.878 [2024-07-23 01:09:55.047822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.878 qpair failed and we were unable to recover it. 
00:30:10.878 [2024-07-23 01:09:55.057654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.879 [2024-07-23 01:09:55.057796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.879 [2024-07-23 01:09:55.057821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.879 [2024-07-23 01:09:55.057835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.879 [2024-07-23 01:09:55.057847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.879 [2024-07-23 01:09:55.057875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.879 qpair failed and we were unable to recover it. 00:30:10.879 [2024-07-23 01:09:55.067653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.879 [2024-07-23 01:09:55.067789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.879 [2024-07-23 01:09:55.067815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.879 [2024-07-23 01:09:55.067828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.879 [2024-07-23 01:09:55.067846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.879 [2024-07-23 01:09:55.067875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.879 qpair failed and we were unable to recover it. 00:30:10.879 [2024-07-23 01:09:55.077720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.879 [2024-07-23 01:09:55.077872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.879 [2024-07-23 01:09:55.077897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.879 [2024-07-23 01:09:55.077913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.879 [2024-07-23 01:09:55.077925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:10.879 [2024-07-23 01:09:55.077953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.879 qpair failed and we were unable to recover it. 
00:30:11.138 [2024-07-23 01:09:55.087733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.087880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.087905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.087919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.087932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.087959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.097738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.097879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.097904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.097917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.097930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.097957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.107780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.107927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.107951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.107964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.107977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.108005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 
00:30:11.138 [2024-07-23 01:09:55.117808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.117950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.117975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.117989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.118001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.118028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.127807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.127943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.127969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.127983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.127996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.128023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.137851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.137990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.138015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.138029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.138042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.138069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 
00:30:11.138 [2024-07-23 01:09:55.147862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.148047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.148073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.148087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.148099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.148128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.157883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.158072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.158098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.158111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.158129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.158158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.167935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.168074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.168099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.168113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.168125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.168153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 
00:30:11.138 [2024-07-23 01:09:55.177970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.178146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.178173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.178187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.178201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.178229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.187975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.188115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.188139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.188154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.188166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.138 [2024-07-23 01:09:55.188194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.138 qpair failed and we were unable to recover it. 00:30:11.138 [2024-07-23 01:09:55.198028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.138 [2024-07-23 01:09:55.198211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.138 [2024-07-23 01:09:55.198237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.138 [2024-07-23 01:09:55.198251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.138 [2024-07-23 01:09:55.198264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.198291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 
00:30:11.139 [2024-07-23 01:09:55.208035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.208167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.208192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.208206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.208219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.208247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.218096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.218266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.218291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.218305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.218318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.218345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.228118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.228296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.228321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.228335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.228347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.228375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 
00:30:11.139 [2024-07-23 01:09:55.238229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.238364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.238389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.238402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.238415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.238442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.248167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.248305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.248331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.248352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.248365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.248393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.258241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.258408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.258433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.258447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.258459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.258486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 
00:30:11.139 [2024-07-23 01:09:55.268245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.268434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.268459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.268473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.268485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.268515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.278236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.278371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.278397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.278410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.278423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.278450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.288297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.288445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.288471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.288485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.288497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.288525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 
00:30:11.139 [2024-07-23 01:09:55.298320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.298465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.298490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.298504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.298516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.298544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.308331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.308506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.308531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.308545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.308558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.308584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.318367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.318498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.318523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.318537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.318549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.318576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 
00:30:11.139 [2024-07-23 01:09:55.328501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.139 [2024-07-23 01:09:55.328678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.139 [2024-07-23 01:09:55.328704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.139 [2024-07-23 01:09:55.328718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.139 [2024-07-23 01:09:55.328731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.139 [2024-07-23 01:09:55.328759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.139 qpair failed and we were unable to recover it. 00:30:11.139 [2024-07-23 01:09:55.338458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.140 [2024-07-23 01:09:55.338606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.140 [2024-07-23 01:09:55.338638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.140 [2024-07-23 01:09:55.338663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.140 [2024-07-23 01:09:55.338677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.140 [2024-07-23 01:09:55.338707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.140 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-23 01:09:55.348460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-23 01:09:55.348637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.348665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.348679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.348691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.348719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-23 01:09:55.358468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.358626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.358652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.358666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.358680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.358708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.368517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.368656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.368682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.368697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.368710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.368740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.378563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.378718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.378744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.378758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.378771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.378799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-23 01:09:55.388570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.388720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.388746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.388761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.388773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.388801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.398609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.398772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.398797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.398812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.398824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.398851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.408665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.408810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.408837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.408856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.408868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.408898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-23 01:09:55.418692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.418859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.418891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.418906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.418919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.418947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.428754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.428918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.428944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.428964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.428978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.429006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.438743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.438881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.438906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.438920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.438933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.438961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-23 01:09:55.448742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.448881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.448907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.448921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.448934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.448962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.458780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.458922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.458947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.458961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.458974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.459001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-23 01:09:55.468852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.469016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.469040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.469054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-23 01:09:55.469066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.399 [2024-07-23 01:09:55.469093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-23 01:09:55.478843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-23 01:09:55.478980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-23 01:09:55.479006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-23 01:09:55.479020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.479032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.479059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.488892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.489026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.489052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.489066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.489078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.489106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.498929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.499071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.499096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.499110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.499123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.499150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 
00:30:11.400 [2024-07-23 01:09:55.508930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.509089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.509115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.509129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.509141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.509168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.519021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.519170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.519195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.519215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.519228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.519256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.528994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.529134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.529160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.529174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.529187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.529215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 
00:30:11.400 [2024-07-23 01:09:55.539007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.539147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.539172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.539186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.539198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.539225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.549079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.549217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.549242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.549255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.549267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.549297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.559091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.559228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.559253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.559266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.559279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.559306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 
00:30:11.400 [2024-07-23 01:09:55.569119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.569262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.569287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.569301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.569314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.569341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.579149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.579291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.579317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.579330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.579343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.579370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.400 [2024-07-23 01:09:55.589184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.589327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.589352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.589366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.589379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.589405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 
00:30:11.400 [2024-07-23 01:09:55.599182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-23 01:09:55.599327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-23 01:09:55.599353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-23 01:09:55.599366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-23 01:09:55.599379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.400 [2024-07-23 01:09:55.599406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.659 [2024-07-23 01:09:55.609258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.659 [2024-07-23 01:09:55.609407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.659 [2024-07-23 01:09:55.609437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.659 [2024-07-23 01:09:55.609452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.659 [2024-07-23 01:09:55.609465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.659 [2024-07-23 01:09:55.609492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.659 qpair failed and we were unable to recover it. 00:30:11.659 [2024-07-23 01:09:55.619285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.659 [2024-07-23 01:09:55.619428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.659 [2024-07-23 01:09:55.619454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.659 [2024-07-23 01:09:55.619468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.659 [2024-07-23 01:09:55.619480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.659 [2024-07-23 01:09:55.619508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.659 qpair failed and we were unable to recover it. 
00:30:11.659 [2024-07-23 01:09:55.629263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.659 [2024-07-23 01:09:55.629425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.659 [2024-07-23 01:09:55.629451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.659 [2024-07-23 01:09:55.629464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.659 [2024-07-23 01:09:55.629477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.659 [2024-07-23 01:09:55.629504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.659 qpair failed and we were unable to recover it. 00:30:11.659 [2024-07-23 01:09:55.639287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.659 [2024-07-23 01:09:55.639446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.639472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.639489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.639501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.639529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.649318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.649458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.649483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.649498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.649510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.649538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 
00:30:11.660 [2024-07-23 01:09:55.659382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.659524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.659549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.659563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.659576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.659603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.669387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.669535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.669561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.669575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.669588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.669621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.679413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.679550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.679575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.679589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.679601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.679635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 
00:30:11.660 [2024-07-23 01:09:55.689458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.689605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.689640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.689654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.689667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.689695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.699498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.699675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.699705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.699720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.699732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.699760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.709482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.709628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.709653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.709666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.709679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.709707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 
00:30:11.660 [2024-07-23 01:09:55.719503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.719641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.719665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.719679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.719691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.719718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.729556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.729701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.729727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.729741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.729753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.729782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.739620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.739761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.739786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.739800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.739813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.739850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 
00:30:11.660 [2024-07-23 01:09:55.749619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.749756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.749781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.749795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.749808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.749836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.759681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.759817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.759842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.759856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.759868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.660 [2024-07-23 01:09:55.759895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.660 qpair failed and we were unable to recover it. 00:30:11.660 [2024-07-23 01:09:55.769701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.660 [2024-07-23 01:09:55.769835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.660 [2024-07-23 01:09:55.769860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.660 [2024-07-23 01:09:55.769874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.660 [2024-07-23 01:09:55.769886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cf4350 00:30:11.661 [2024-07-23 01:09:55.769914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.661 qpair failed and we were unable to recover it. 
00:30:11.661 [2024-07-23 01:09:55.779727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.661 [2024-07-23 01:09:55.779897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.661 [2024-07-23 01:09:55.779930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.661 [2024-07-23 01:09:55.779950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.661 [2024-07-23 01:09:55.779965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb170000b90 00:30:11.661 [2024-07-23 01:09:55.779998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.661 qpair failed and we were unable to recover it. 00:30:11.661 [2024-07-23 01:09:55.789743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.661 [2024-07-23 01:09:55.789884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.661 [2024-07-23 01:09:55.789916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.661 [2024-07-23 01:09:55.789931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.661 [2024-07-23 01:09:55.789945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb170000b90 00:30:11.661 [2024-07-23 01:09:55.789975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.661 qpair failed and we were unable to recover it. 00:30:11.661 [2024-07-23 01:09:55.799772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.661 [2024-07-23 01:09:55.799957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.661 [2024-07-23 01:09:55.799989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.661 [2024-07-23 01:09:55.800005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.661 [2024-07-23 01:09:55.800019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb178000b90 00:30:11.661 [2024-07-23 01:09:55.800049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.661 qpair failed and we were unable to recover it. 
00:30:11.661 [2024-07-23 01:09:55.809812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.661 [2024-07-23 01:09:55.809967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.661 [2024-07-23 01:09:55.809995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.661 [2024-07-23 01:09:55.810010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.661 [2024-07-23 01:09:55.810023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb178000b90 00:30:11.661 [2024-07-23 01:09:55.810052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.661 qpair failed and we were unable to recover it. 00:30:11.661 [2024-07-23 01:09:55.819826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.661 [2024-07-23 01:09:55.819967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.661 [2024-07-23 01:09:55.819999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.661 [2024-07-23 01:09:55.820015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.661 [2024-07-23 01:09:55.820028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb168000b90 00:30:11.661 [2024-07-23 01:09:55.820060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.661 qpair failed and we were unable to recover it. 00:30:11.661 [2024-07-23 01:09:55.829866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.661 [2024-07-23 01:09:55.830009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.661 [2024-07-23 01:09:55.830036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.661 [2024-07-23 01:09:55.830051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.661 [2024-07-23 01:09:55.830064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb168000b90 00:30:11.661 [2024-07-23 01:09:55.830099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.661 qpair failed and we were unable to recover it. 
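Every retry in the run above targets the same listener (trtype TCP, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). Purely as an illustration — this test exercises SPDK's userspace initiator, not the kernel one, so the following is not what the test itself runs — the equivalent connect and teardown with nvme-cli, using the parameters copied from the error records, would be:

    # illustrative only; parameters taken from the CONNECT error records above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # and to drop the association again
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1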
00:30:11.661 [2024-07-23 01:09:55.830341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01dc0 is same with the state(5) to be set 00:30:11.661 [2024-07-23 01:09:55.830464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d01dc0 (9): Bad file descriptor 00:30:11.661 Initializing NVMe Controllers 00:30:11.661 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:11.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:11.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:11.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:11.661 Initialization complete. Launching workers. 00:30:11.661 Starting thread on core 1 00:30:11.661 Starting thread on core 2 00:30:11.661 Starting thread on core 3 00:30:11.661 Starting thread on core 0 00:30:11.661 01:09:55 -- host/target_disconnect.sh@59 -- # sync 00:30:11.661 00:30:11.661 real 0m11.390s 00:30:11.661 user 0m20.804s 00:30:11.661 sys 0m5.343s 00:30:11.661 01:09:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.661 01:09:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.661 ************************************ 00:30:11.661 END TEST nvmf_target_disconnect_tc2 00:30:11.661 ************************************ 00:30:11.919 01:09:55 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:11.919 01:09:55 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:11.919 01:09:55 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:11.919 01:09:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:11.919 01:09:55 -- nvmf/common.sh@116 -- # sync 00:30:11.919 01:09:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:11.919 01:09:55 -- nvmf/common.sh@119 -- # set +e 00:30:11.919 01:09:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:11.919 01:09:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:11.919 rmmod nvme_tcp 00:30:11.919 rmmod nvme_fabrics 00:30:11.919 rmmod nvme_keyring 00:30:11.919 01:09:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:11.919 01:09:55 -- nvmf/common.sh@123 -- # set -e 00:30:11.919 01:09:55 -- nvmf/common.sh@124 -- # return 0 00:30:11.919 01:09:55 -- nvmf/common.sh@477 -- # '[' -n 3530360 ']' 00:30:11.919 01:09:55 -- nvmf/common.sh@478 -- # killprocess 3530360 00:30:11.920 01:09:55 -- common/autotest_common.sh@926 -- # '[' -z 3530360 ']' 00:30:11.920 01:09:55 -- common/autotest_common.sh@930 -- # kill -0 3530360 00:30:11.920 01:09:55 -- common/autotest_common.sh@931 -- # uname 00:30:11.920 01:09:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:11.920 01:09:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3530360 00:30:11.920 01:09:55 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:11.920 01:09:55 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:11.920 01:09:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3530360' 00:30:11.920 killing process with pid 3530360 00:30:11.920 01:09:55 -- common/autotest_common.sh@945 -- # kill 3530360 00:30:11.920 01:09:55 -- common/autotest_common.sh@950 -- # wait 3530360 00:30:12.179 01:09:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
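The nvmftestfini teardown in the surrounding trace boils down to a short cleanup sequence; a minimal sketch of it, using only the module names, the kill/wait steps, and the interface flush visible in the trace (the pid variable below is a stand-in for the target pid this run happened to use, 3530360, and the interface name is the one shown later in the trace):

    # unload the kernel initiator modules the test loaded earlier
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt app started for the test and wait for it to exit
    kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"
    # flush the test addresses from the target-side interface
    ip -4 addr flush cvl_0_1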
00:30:12.179 01:09:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:12.179 01:09:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:12.179 01:09:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.179 01:09:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:12.179 01:09:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.179 01:09:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.179 01:09:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.083 01:09:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:14.083 00:30:14.083 real 0m16.114s 00:30:14.083 user 0m46.742s 00:30:14.083 sys 0m7.361s 00:30:14.083 01:09:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.083 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:30:14.083 ************************************ 00:30:14.083 END TEST nvmf_target_disconnect 00:30:14.083 ************************************ 00:30:14.083 01:09:58 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:14.083 01:09:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:14.083 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:30:14.083 01:09:58 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:14.083 00:30:14.083 real 22m27.913s 00:30:14.083 user 64m46.889s 00:30:14.083 sys 5m34.783s 00:30:14.083 01:09:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.083 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:30:14.083 ************************************ 00:30:14.083 END TEST nvmf_tcp 00:30:14.083 ************************************ 00:30:14.341 01:09:58 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:14.341 01:09:58 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:14.341 01:09:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:14.341 01:09:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:14.341 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:30:14.341 ************************************ 00:30:14.341 START TEST spdkcli_nvmf_tcp 00:30:14.341 ************************************ 00:30:14.341 01:09:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:14.341 * Looking for test storage... 
00:30:14.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:14.341 01:09:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:14.341 01:09:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.341 01:09:58 -- nvmf/common.sh@7 -- # uname -s 00:30:14.341 01:09:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.341 01:09:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.341 01:09:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.341 01:09:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.341 01:09:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.341 01:09:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.341 01:09:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.341 01:09:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.341 01:09:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.341 01:09:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.341 01:09:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.341 01:09:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.341 01:09:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.341 01:09:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.341 01:09:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.341 01:09:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.341 01:09:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.341 01:09:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.341 01:09:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.341 01:09:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.341 01:09:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.341 01:09:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.341 01:09:58 -- paths/export.sh@5 -- # export PATH 00:30:14.341 01:09:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.341 01:09:58 -- nvmf/common.sh@46 -- # : 0 00:30:14.341 01:09:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:14.341 01:09:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:14.341 01:09:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:14.341 01:09:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.341 01:09:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.341 01:09:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:14.341 01:09:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:14.341 01:09:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:14.341 01:09:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:14.341 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:30:14.341 01:09:58 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:14.341 01:09:58 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3531516 00:30:14.341 01:09:58 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:14.341 01:09:58 -- spdkcli/common.sh@34 -- # waitforlisten 3531516 00:30:14.341 01:09:58 -- common/autotest_common.sh@819 -- # '[' -z 3531516 ']' 00:30:14.341 01:09:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.341 01:09:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:14.341 01:09:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.341 01:09:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:14.341 01:09:58 -- common/autotest_common.sh@10 -- # set +x 00:30:14.341 [2024-07-23 01:09:58.401500] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
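The run_nvmf_tgt and waitforlisten steps traced above amount to launching the target binary and polling its RPC socket until it answers. A rough sketch using the flags and socket path shown in the trace — waitforlisten itself is the autotest helper, so the polling loop below is only an approximation of what it does:

    # start the NVMe-oF target with the core mask and shm id from the trace
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done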
00:30:14.341 [2024-07-23 01:09:58.401601] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531516 ] 00:30:14.341 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.341 [2024-07-23 01:09:58.462482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.599 [2024-07-23 01:09:58.548685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:14.599 [2024-07-23 01:09:58.551636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.599 [2024-07-23 01:09:58.551646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.170 01:09:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:15.170 01:09:59 -- common/autotest_common.sh@852 -- # return 0 00:30:15.170 01:09:59 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:15.170 01:09:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:15.170 01:09:59 -- common/autotest_common.sh@10 -- # set +x 00:30:15.426 01:09:59 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:15.426 01:09:59 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:15.426 01:09:59 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:15.426 01:09:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:15.426 01:09:59 -- common/autotest_common.sh@10 -- # set +x 00:30:15.426 01:09:59 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:15.426 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:15.426 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:15.426 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:15.426 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:15.426 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:15.426 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:15.426 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:15.426 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:15.426 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:15.426 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:15.426 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:15.426 ' 00:30:15.683 [2024-07-23 01:09:59.754171] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:18.212 [2024-07-23 01:10:01.892076] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.145 [2024-07-23 01:10:03.112451] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:21.670 [2024-07-23 01:10:05.383716] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:23.567 [2024-07-23 01:10:07.354319] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:24.940 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:24.940 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:24.940 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:24.940 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:24.940 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:24.940 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:24.940 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:24.940 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:24.940 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:24.940 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:24.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:24.940 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:24.940 01:10:08 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:24.940 01:10:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:24.940 01:10:08 -- common/autotest_common.sh@10 -- # set +x 00:30:24.940 01:10:08 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:24.940 01:10:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:24.940 01:10:08 -- common/autotest_common.sh@10 -- # set +x 00:30:24.940 01:10:08 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:24.940 01:10:08 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:25.506 01:10:09 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:25.506 01:10:09 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:25.506 01:10:09 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:25.506 01:10:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:25.506 01:10:09 -- common/autotest_common.sh@10 -- # set +x 00:30:25.506 01:10:09 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:30:25.506 01:10:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:25.506 01:10:09 -- common/autotest_common.sh@10 -- # set +x 00:30:25.506 01:10:09 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:25.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:25.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:25.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:25.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:25.506 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:25.506 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:25.506 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:25.506 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:25.506 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:25.506 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:25.506 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:25.507 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:25.507 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:25.507 ' 00:30:30.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:30.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:30.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:30.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:30.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:30.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:30.769 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:30.769 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:30.769 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:30.769 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:30.769 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:30.769 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:30.769 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:30.769 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:30.769 01:10:14 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:30.769 01:10:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:30.769 01:10:14 -- common/autotest_common.sh@10 -- # set +x 00:30:30.769 01:10:14 -- spdkcli/nvmf.sh@90 -- # killprocess 3531516 00:30:30.769 01:10:14 -- common/autotest_common.sh@926 -- # '[' -z 3531516 ']' 00:30:30.769 01:10:14 -- 
common/autotest_common.sh@930 -- # kill -0 3531516 00:30:30.769 01:10:14 -- common/autotest_common.sh@931 -- # uname 00:30:30.769 01:10:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.769 01:10:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3531516 00:30:30.769 01:10:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:30.769 01:10:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:30.769 01:10:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3531516' 00:30:30.769 killing process with pid 3531516 00:30:30.769 01:10:14 -- common/autotest_common.sh@945 -- # kill 3531516 00:30:30.769 [2024-07-23 01:10:14.801887] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:30.769 01:10:14 -- common/autotest_common.sh@950 -- # wait 3531516 00:30:31.028 01:10:15 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:31.028 01:10:15 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:31.028 01:10:15 -- spdkcli/common.sh@13 -- # '[' -n 3531516 ']' 00:30:31.028 01:10:15 -- spdkcli/common.sh@14 -- # killprocess 3531516 00:30:31.028 01:10:15 -- common/autotest_common.sh@926 -- # '[' -z 3531516 ']' 00:30:31.028 01:10:15 -- common/autotest_common.sh@930 -- # kill -0 3531516 00:30:31.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3531516) - No such process 00:30:31.028 01:10:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3531516 is not found' 00:30:31.028 Process with pid 3531516 is not found 00:30:31.028 01:10:15 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:31.028 01:10:15 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:31.028 01:10:15 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:31.028 00:30:31.028 real 0m16.727s 00:30:31.028 user 0m35.476s 00:30:31.028 sys 0m0.860s 00:30:31.028 01:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.028 01:10:15 -- common/autotest_common.sh@10 -- # set +x 00:30:31.028 ************************************ 00:30:31.028 END TEST spdkcli_nvmf_tcp 00:30:31.028 ************************************ 00:30:31.028 01:10:15 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:31.028 01:10:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:31.028 01:10:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:31.028 01:10:15 -- common/autotest_common.sh@10 -- # set +x 00:30:31.028 ************************************ 00:30:31.028 START TEST nvmf_identify_passthru 00:30:31.028 ************************************ 00:30:31.028 01:10:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:31.028 * Looking for test storage... 
00:30:31.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.029 01:10:15 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.029 01:10:15 -- nvmf/common.sh@7 -- # uname -s 00:30:31.029 01:10:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.029 01:10:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.029 01:10:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.029 01:10:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.029 01:10:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.029 01:10:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.029 01:10:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.029 01:10:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.029 01:10:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.029 01:10:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.029 01:10:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.029 01:10:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.029 01:10:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.029 01:10:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.029 01:10:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.029 01:10:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.029 01:10:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.029 01:10:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.029 01:10:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.029 01:10:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- paths/export.sh@5 -- # export PATH 00:30:31.029 01:10:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- nvmf/common.sh@46 -- # : 0 00:30:31.029 01:10:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:31.029 01:10:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:31.029 01:10:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:31.029 01:10:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.029 01:10:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.029 01:10:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:31.029 01:10:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:31.029 01:10:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:31.029 01:10:15 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.029 01:10:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.029 01:10:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.029 01:10:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.029 01:10:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- paths/export.sh@5 -- # export PATH 00:30:31.029 01:10:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.029 01:10:15 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:30:31.029 01:10:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:31.029 01:10:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.029 01:10:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:31.029 01:10:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:31.029 01:10:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:31.029 01:10:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.029 01:10:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:31.029 01:10:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.029 01:10:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:31.029 01:10:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:31.029 01:10:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:31.029 01:10:15 -- common/autotest_common.sh@10 -- # set +x 00:30:32.933 01:10:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:32.933 01:10:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:32.933 01:10:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:32.933 01:10:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:32.933 01:10:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:32.933 01:10:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:32.933 01:10:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:32.933 01:10:17 -- nvmf/common.sh@294 -- # net_devs=() 00:30:32.933 01:10:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:32.933 01:10:17 -- nvmf/common.sh@295 -- # e810=() 00:30:32.933 01:10:17 -- nvmf/common.sh@295 -- # local -ga e810 00:30:32.933 01:10:17 -- nvmf/common.sh@296 -- # x722=() 00:30:32.933 01:10:17 -- nvmf/common.sh@296 -- # local -ga x722 00:30:32.933 01:10:17 -- nvmf/common.sh@297 -- # mlx=() 00:30:32.933 01:10:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:32.933 01:10:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.933 01:10:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:32.933 01:10:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:32.933 01:10:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:32.933 01:10:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:32.933 01:10:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:32.933 Found 0000:0a:00.0 (0x8086 - 
0x159b) 00:30:32.933 01:10:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:32.933 01:10:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:32.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:32.933 01:10:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:32.933 01:10:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:32.933 01:10:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.933 01:10:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:32.933 01:10:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.933 01:10:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:32.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:32.933 01:10:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.933 01:10:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:32.933 01:10:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.933 01:10:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:32.933 01:10:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.933 01:10:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:32.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:32.933 01:10:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.933 01:10:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:32.933 01:10:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:32.933 01:10:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:32.933 01:10:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:32.933 01:10:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.933 01:10:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.933 01:10:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.933 01:10:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:32.933 01:10:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.933 01:10:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.933 01:10:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:32.933 01:10:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.933 01:10:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.933 01:10:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:32.933 01:10:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:32.933 01:10:17 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:32.933 01:10:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.933 01:10:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.933 01:10:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.933 01:10:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:32.933 01:10:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.933 01:10:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.194 01:10:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.194 01:10:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:33.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:30:33.194 00:30:33.194 --- 10.0.0.2 ping statistics --- 00:30:33.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.194 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:33.194 01:10:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:33.194 00:30:33.194 --- 10.0.0.1 ping statistics --- 00:30:33.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.194 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:33.194 01:10:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.194 01:10:17 -- nvmf/common.sh@410 -- # return 0 00:30:33.194 01:10:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:33.194 01:10:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.194 01:10:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:33.194 01:10:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:33.194 01:10:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.194 01:10:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:33.194 01:10:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:33.194 01:10:17 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:33.194 01:10:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:33.194 01:10:17 -- common/autotest_common.sh@10 -- # set +x 00:30:33.194 01:10:17 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:33.194 01:10:17 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:33.194 01:10:17 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:33.194 01:10:17 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:33.194 01:10:17 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:33.194 01:10:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:33.194 01:10:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:33.194 01:10:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:33.194 01:10:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:33.194 01:10:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:33.194 01:10:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:33.194 01:10:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:33.194 01:10:17 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:33.194 01:10:17 -- 
target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:33.194 01:10:17 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:33.194 01:10:17 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:33.194 01:10:17 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:33.194 01:10:17 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:33.194 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.410 01:10:21 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:37.410 01:10:21 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:37.410 01:10:21 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:37.410 01:10:21 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:37.410 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.593 01:10:25 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:41.593 01:10:25 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:41.593 01:10:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.593 01:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.593 01:10:25 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:41.593 01:10:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:41.593 01:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.593 01:10:25 -- target/identify_passthru.sh@31 -- # nvmfpid=3536294 00:30:41.593 01:10:25 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:41.593 01:10:25 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.593 01:10:25 -- target/identify_passthru.sh@35 -- # waitforlisten 3536294 00:30:41.593 01:10:25 -- common/autotest_common.sh@819 -- # '[' -z 3536294 ']' 00:30:41.593 01:10:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.593 01:10:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:41.593 01:10:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.593 01:10:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:41.593 01:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.593 [2024-07-23 01:10:25.685091] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:41.593 [2024-07-23 01:10:25.685165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.593 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.593 [2024-07-23 01:10:25.748991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.851 [2024-07-23 01:10:25.833312] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:41.851 [2024-07-23 01:10:25.833452] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:41.851 [2024-07-23 01:10:25.833468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.851 [2024-07-23 01:10:25.833480] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.851 [2024-07-23 01:10:25.833544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.851 [2024-07-23 01:10:25.833592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.851 [2024-07-23 01:10:25.833660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.851 [2024-07-23 01:10:25.833663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.851 01:10:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:41.851 01:10:25 -- common/autotest_common.sh@852 -- # return 0 00:30:41.851 01:10:25 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:41.851 01:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.851 01:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.851 INFO: Log level set to 20 00:30:41.851 INFO: Requests: 00:30:41.851 { 00:30:41.851 "jsonrpc": "2.0", 00:30:41.851 "method": "nvmf_set_config", 00:30:41.851 "id": 1, 00:30:41.851 "params": { 00:30:41.851 "admin_cmd_passthru": { 00:30:41.851 "identify_ctrlr": true 00:30:41.851 } 00:30:41.851 } 00:30:41.851 } 00:30:41.851 00:30:41.851 INFO: response: 00:30:41.851 { 00:30:41.851 "jsonrpc": "2.0", 00:30:41.851 "id": 1, 00:30:41.851 "result": true 00:30:41.851 } 00:30:41.851 00:30:41.851 01:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.851 01:10:25 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:41.851 01:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.851 01:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.851 INFO: Setting log level to 20 00:30:41.851 INFO: Setting log level to 20 00:30:41.851 INFO: Log level set to 20 00:30:41.851 INFO: Log level set to 20 00:30:41.851 INFO: Requests: 00:30:41.851 { 00:30:41.851 "jsonrpc": "2.0", 00:30:41.851 "method": "framework_start_init", 00:30:41.851 "id": 1 00:30:41.851 } 00:30:41.851 00:30:41.851 INFO: Requests: 00:30:41.851 { 00:30:41.851 "jsonrpc": "2.0", 00:30:41.851 "method": "framework_start_init", 00:30:41.851 "id": 1 00:30:41.851 } 00:30:41.851 00:30:41.851 [2024-07-23 01:10:25.985832] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:41.851 INFO: response: 00:30:41.851 { 00:30:41.851 "jsonrpc": "2.0", 00:30:41.851 "id": 1, 00:30:41.851 "result": true 00:30:41.851 } 00:30:41.851 00:30:41.851 INFO: response: 00:30:41.851 { 00:30:41.851 "jsonrpc": "2.0", 00:30:41.851 "id": 1, 00:30:41.851 "result": true 00:30:41.851 } 00:30:41.851 00:30:41.851 01:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.851 01:10:25 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.851 01:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.851 01:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.851 INFO: Setting log level to 40 00:30:41.851 INFO: Setting log level to 40 00:30:41.851 INFO: Setting log level to 40 00:30:41.851 [2024-07-23 01:10:25.995743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.851 01:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.851 01:10:26 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:30:41.851 01:10:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.851 01:10:26 -- common/autotest_common.sh@10 -- # set +x 00:30:41.852 01:10:26 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:41.852 01:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.852 01:10:26 -- common/autotest_common.sh@10 -- # set +x 00:30:45.130 Nvme0n1 00:30:45.130 01:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.130 01:10:28 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:45.130 01:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.130 01:10:28 -- common/autotest_common.sh@10 -- # set +x 00:30:45.130 01:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.130 01:10:28 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:45.130 01:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.130 01:10:28 -- common/autotest_common.sh@10 -- # set +x 00:30:45.130 01:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.130 01:10:28 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.130 01:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.130 01:10:28 -- common/autotest_common.sh@10 -- # set +x 00:30:45.130 [2024-07-23 01:10:28.882667] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.130 01:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.130 01:10:28 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:45.130 01:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.130 01:10:28 -- common/autotest_common.sh@10 -- # set +x 00:30:45.130 [2024-07-23 01:10:28.890391] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:45.130 [ 00:30:45.130 { 00:30:45.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:45.130 "subtype": "Discovery", 00:30:45.130 "listen_addresses": [], 00:30:45.130 "allow_any_host": true, 00:30:45.130 "hosts": [] 00:30:45.130 }, 00:30:45.130 { 00:30:45.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.130 "subtype": "NVMe", 00:30:45.130 "listen_addresses": [ 00:30:45.130 { 00:30:45.130 "transport": "TCP", 00:30:45.130 "trtype": "TCP", 00:30:45.130 "adrfam": "IPv4", 00:30:45.130 "traddr": "10.0.0.2", 00:30:45.130 "trsvcid": "4420" 00:30:45.130 } 00:30:45.130 ], 00:30:45.130 "allow_any_host": true, 00:30:45.130 "hosts": [], 00:30:45.130 "serial_number": "SPDK00000000000001", 00:30:45.130 "model_number": "SPDK bdev Controller", 00:30:45.130 "max_namespaces": 1, 00:30:45.130 "min_cntlid": 1, 00:30:45.130 "max_cntlid": 65519, 00:30:45.130 "namespaces": [ 00:30:45.130 { 00:30:45.130 "nsid": 1, 00:30:45.130 "bdev_name": "Nvme0n1", 00:30:45.130 "name": "Nvme0n1", 00:30:45.130 "nguid": "49B33FDEB4A74EE1B834E1DA6457381A", 00:30:45.130 "uuid": "49b33fde-b4a7-4ee1-b834-e1da6457381a" 00:30:45.130 } 00:30:45.130 ] 00:30:45.130 } 00:30:45.130 ] 00:30:45.130 01:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.130 01:10:28 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:45.130 01:10:28 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:45.130 01:10:28 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:45.130 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.130 01:10:29 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:45.130 01:10:29 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:45.130 01:10:29 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:45.130 01:10:29 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:45.130 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.130 01:10:29 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:45.130 01:10:29 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:45.130 01:10:29 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:45.130 01:10:29 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:45.130 01:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.130 01:10:29 -- common/autotest_common.sh@10 -- # set +x 00:30:45.130 01:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.130 01:10:29 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:45.130 01:10:29 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:45.130 01:10:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:45.130 01:10:29 -- nvmf/common.sh@116 -- # sync 00:30:45.130 01:10:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:45.130 01:10:29 -- nvmf/common.sh@119 -- # set +e 00:30:45.130 01:10:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:45.130 01:10:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:45.130 rmmod nvme_tcp 00:30:45.130 rmmod nvme_fabrics 00:30:45.130 rmmod nvme_keyring 00:30:45.130 01:10:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:45.130 01:10:29 -- nvmf/common.sh@123 -- # set -e 00:30:45.130 01:10:29 -- nvmf/common.sh@124 -- # return 0 00:30:45.130 01:10:29 -- nvmf/common.sh@477 -- # '[' -n 3536294 ']' 00:30:45.130 01:10:29 -- nvmf/common.sh@478 -- # killprocess 3536294 00:30:45.130 01:10:29 -- common/autotest_common.sh@926 -- # '[' -z 3536294 ']' 00:30:45.130 01:10:29 -- common/autotest_common.sh@930 -- # kill -0 3536294 00:30:45.130 01:10:29 -- common/autotest_common.sh@931 -- # uname 00:30:45.130 01:10:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:45.130 01:10:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3536294 00:30:45.387 01:10:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:45.387 01:10:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:45.387 01:10:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3536294' 00:30:45.387 killing process with pid 3536294 00:30:45.387 01:10:29 -- common/autotest_common.sh@945 -- # kill 3536294 00:30:45.387 [2024-07-23 01:10:29.342510] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:45.387 01:10:29 -- common/autotest_common.sh@950 -- # wait 3536294 00:30:46.760 01:10:30 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:46.760 01:10:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:46.760 01:10:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:46.760 01:10:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.760 01:10:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:46.760 01:10:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.760 01:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:46.760 01:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.295 01:10:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:49.295 00:30:49.295 real 0m17.896s 00:30:49.295 user 0m26.675s 00:30:49.295 sys 0m2.218s 00:30:49.295 01:10:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.295 01:10:32 -- common/autotest_common.sh@10 -- # set +x 00:30:49.295 ************************************ 00:30:49.295 END TEST nvmf_identify_passthru 00:30:49.295 ************************************ 00:30:49.295 01:10:32 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:49.295 01:10:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:49.295 01:10:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:49.295 01:10:32 -- common/autotest_common.sh@10 -- # set +x 00:30:49.295 ************************************ 00:30:49.295 START TEST nvmf_dif 00:30:49.295 ************************************ 00:30:49.295 01:10:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:49.295 * Looking for test storage... 00:30:49.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.295 01:10:33 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.295 01:10:33 -- nvmf/common.sh@7 -- # uname -s 00:30:49.295 01:10:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.295 01:10:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.295 01:10:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.295 01:10:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.295 01:10:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.295 01:10:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.295 01:10:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.295 01:10:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.295 01:10:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.295 01:10:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.295 01:10:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:49.295 01:10:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:49.295 01:10:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.295 01:10:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.295 01:10:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.296 01:10:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.296 01:10:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.296 01:10:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.296 01:10:33 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.296 01:10:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.296 01:10:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.296 01:10:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.296 01:10:33 -- paths/export.sh@5 -- # export PATH 00:30:49.296 01:10:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.296 01:10:33 -- nvmf/common.sh@46 -- # : 0 00:30:49.296 01:10:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:49.296 01:10:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:49.296 01:10:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:49.296 01:10:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.296 01:10:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.296 01:10:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:49.296 01:10:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:49.296 01:10:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:49.296 01:10:33 -- target/dif.sh@15 -- # NULL_META=16 00:30:49.296 01:10:33 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:49.296 01:10:33 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:49.296 01:10:33 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:49.296 01:10:33 -- target/dif.sh@135 -- # nvmftestinit 00:30:49.296 01:10:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:49.296 01:10:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.296 01:10:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:49.296 01:10:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:49.296 01:10:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:49.296 01:10:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.296 01:10:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:49.296 01:10:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.296 01:10:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:49.296 01:10:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
00:30:49.296 01:10:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:49.296 01:10:33 -- common/autotest_common.sh@10 -- # set +x 00:30:51.197 01:10:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:51.197 01:10:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:51.197 01:10:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:51.197 01:10:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:51.197 01:10:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:51.197 01:10:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:51.197 01:10:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:51.197 01:10:35 -- nvmf/common.sh@294 -- # net_devs=() 00:30:51.197 01:10:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:51.197 01:10:35 -- nvmf/common.sh@295 -- # e810=() 00:30:51.197 01:10:35 -- nvmf/common.sh@295 -- # local -ga e810 00:30:51.197 01:10:35 -- nvmf/common.sh@296 -- # x722=() 00:30:51.197 01:10:35 -- nvmf/common.sh@296 -- # local -ga x722 00:30:51.197 01:10:35 -- nvmf/common.sh@297 -- # mlx=() 00:30:51.197 01:10:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:51.197 01:10:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.197 01:10:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:51.197 01:10:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:51.197 01:10:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:51.197 01:10:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:51.197 01:10:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:51.197 01:10:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:51.197 01:10:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:51.197 01:10:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:51.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:51.198 01:10:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:51.198 01:10:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:51.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:51.198 01:10:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:51.198 01:10:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:51.198 01:10:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:51.198 01:10:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.198 01:10:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:51.198 01:10:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.198 01:10:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:51.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:51.198 01:10:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.198 01:10:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:51.198 01:10:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.198 01:10:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:51.198 01:10:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.198 01:10:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:51.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:51.198 01:10:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.198 01:10:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:51.198 01:10:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:51.198 01:10:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:51.198 01:10:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:51.198 01:10:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.198 01:10:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.198 01:10:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.198 01:10:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:51.198 01:10:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.198 01:10:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.198 01:10:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:51.198 01:10:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.198 01:10:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.198 01:10:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:51.198 01:10:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:51.198 01:10:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.198 01:10:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.198 01:10:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.198 01:10:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.198 01:10:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:51.198 01:10:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.198 01:10:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.198 01:10:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.198 01:10:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:51.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:30:51.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:30:51.198 00:30:51.198 --- 10.0.0.2 ping statistics --- 00:30:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.198 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:51.198 01:10:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:30:51.198 00:30:51.198 --- 10.0.0.1 ping statistics --- 00:30:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.198 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:30:51.198 01:10:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.198 01:10:35 -- nvmf/common.sh@410 -- # return 0 00:30:51.198 01:10:35 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:51.198 01:10:35 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:52.133 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:52.133 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:52.133 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:52.133 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:52.133 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:52.133 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:52.133 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:52.133 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:52.133 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:52.133 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:52.133 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:52.133 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:52.133 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:52.133 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:52.133 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:52.133 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:52.133 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:52.391 01:10:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.391 01:10:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:52.391 01:10:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:52.391 01:10:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.391 01:10:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:52.391 01:10:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:52.391 01:10:36 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:52.391 01:10:36 -- target/dif.sh@137 -- # nvmfappstart 00:30:52.391 01:10:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:52.391 01:10:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:52.391 01:10:36 -- common/autotest_common.sh@10 -- # set +x 00:30:52.391 01:10:36 -- nvmf/common.sh@469 -- # nvmfpid=3539513 00:30:52.391 01:10:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:52.391 01:10:36 -- nvmf/common.sh@470 -- # waitforlisten 3539513 00:30:52.391 01:10:36 -- common/autotest_common.sh@819 -- # '[' -z 3539513 ']' 00:30:52.391 01:10:36 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:30:52.391 01:10:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:52.391 01:10:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.391 01:10:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:52.391 01:10:36 -- common/autotest_common.sh@10 -- # set +x 00:30:52.391 [2024-07-23 01:10:36.528869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:52.391 [2024-07-23 01:10:36.528951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.391 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.649 [2024-07-23 01:10:36.601542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.649 [2024-07-23 01:10:36.691233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:52.649 [2024-07-23 01:10:36.691405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.649 [2024-07-23 01:10:36.691426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.649 [2024-07-23 01:10:36.691441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.649 [2024-07-23 01:10:36.691472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.582 01:10:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:53.582 01:10:37 -- common/autotest_common.sh@852 -- # return 0 00:30:53.582 01:10:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:53.582 01:10:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 01:10:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.582 01:10:37 -- target/dif.sh@139 -- # create_transport 00:30:53.582 01:10:37 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:53.582 01:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 [2024-07-23 01:10:37.530198] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.582 01:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.582 01:10:37 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:53.582 01:10:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:53.582 01:10:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 ************************************ 00:30:53.582 START TEST fio_dif_1_default 00:30:53.582 ************************************ 00:30:53.582 01:10:37 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:53.582 01:10:37 -- target/dif.sh@86 -- # create_subsystems 0 00:30:53.582 01:10:37 -- target/dif.sh@28 -- # local sub 00:30:53.582 01:10:37 -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.582 01:10:37 -- target/dif.sh@31 -- # create_subsystem 0 00:30:53.582 01:10:37 -- target/dif.sh@18 -- # local sub_id=0 00:30:53.582 01:10:37 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:53.582 01:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 bdev_null0 00:30:53.582 01:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.582 01:10:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.582 01:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 01:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.582 01:10:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.582 01:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 01:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.582 01:10:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.582 01:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.582 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:30:53.582 [2024-07-23 01:10:37.566438] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.582 01:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.582 01:10:37 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:53.582 01:10:37 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:53.582 01:10:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:53.582 01:10:37 -- nvmf/common.sh@520 -- # config=() 00:30:53.582 01:10:37 -- nvmf/common.sh@520 -- # local subsystem config 00:30:53.582 01:10:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:53.582 01:10:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.582 01:10:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:53.582 { 00:30:53.582 "params": { 00:30:53.582 "name": "Nvme$subsystem", 00:30:53.582 "trtype": "$TEST_TRANSPORT", 00:30:53.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.582 "adrfam": "ipv4", 00:30:53.582 "trsvcid": "$NVMF_PORT", 00:30:53.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.582 "hdgst": ${hdgst:-false}, 00:30:53.582 "ddgst": ${ddgst:-false} 00:30:53.582 }, 00:30:53.582 "method": "bdev_nvme_attach_controller" 00:30:53.582 } 00:30:53.582 EOF 00:30:53.582 )") 00:30:53.582 01:10:37 -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.582 01:10:37 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.582 01:10:37 -- target/dif.sh@54 -- # local file 00:30:53.582 01:10:37 -- target/dif.sh@56 -- # cat 00:30:53.582 01:10:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:53.582 01:10:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.582 01:10:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:53.582 01:10:37 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.582 01:10:37 -- common/autotest_common.sh@1320 -- # shift 00:30:53.582 01:10:37 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:30:53.582 01:10:37 -- nvmf/common.sh@542 -- # cat 00:30:53.582 01:10:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.582 01:10:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.582 01:10:37 -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:53.582 01:10:37 -- nvmf/common.sh@544 -- # jq . 00:30:53.582 01:10:37 -- nvmf/common.sh@545 -- # IFS=, 00:30:53.582 01:10:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:53.582 "params": { 00:30:53.582 "name": "Nvme0", 00:30:53.582 "trtype": "tcp", 00:30:53.582 "traddr": "10.0.0.2", 00:30:53.582 "adrfam": "ipv4", 00:30:53.582 "trsvcid": "4420", 00:30:53.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.582 "hdgst": false, 00:30:53.582 "ddgst": false 00:30:53.582 }, 00:30:53.582 "method": "bdev_nvme_attach_controller" 00:30:53.582 }' 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:53.582 01:10:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:53.582 01:10:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:53.582 01:10:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:53.582 01:10:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:53.582 01:10:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.582 01:10:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.840 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.840 fio-3.35 00:30:53.840 Starting 1 thread 00:30:53.840 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.098 [2024-07-23 01:10:38.262454] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:30:54.098 [2024-07-23 01:10:38.262502] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:06.327 00:31:06.327 filename0: (groupid=0, jobs=1): err= 0: pid=3539877: Tue Jul 23 01:10:48 2024 00:31:06.327 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:31:06.327 slat (nsec): min=3922, max=43285, avg=8380.00, stdev=2389.84 00:31:06.327 clat (usec): min=772, max=45112, avg=21026.85, stdev=20117.68 00:31:06.327 lat (usec): min=779, max=45125, avg=21035.23, stdev=20117.59 00:31:06.327 clat percentiles (usec): 00:31:06.327 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 848], 20.00th=[ 865], 00:31:06.327 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:31:06.327 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:06.327 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:31:06.327 | 99.99th=[45351] 00:31:06.327 bw ( KiB/s): min= 672, max= 768, per=99.88%, avg=759.58, stdev=23.47, samples=19 00:31:06.327 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:31:06.327 lat (usec) : 1000=49.53% 00:31:06.327 lat (msec) : 2=0.37%, 50=50.11% 00:31:06.327 cpu : usr=90.28%, sys=9.43%, ctx=17, majf=0, minf=259 00:31:06.327 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.327 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.327 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:06.327 00:31:06.327 Run status group 0 (all jobs): 00:31:06.327 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10001-10001msec 00:31:06.327 01:10:48 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:06.327 01:10:48 -- target/dif.sh@43 -- # local sub 00:31:06.327 01:10:48 -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.327 01:10:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:06.327 01:10:48 -- target/dif.sh@36 -- # local sub_id=0 00:31:06.327 01:10:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 00:31:06.328 real 0m11.134s 00:31:06.328 user 0m10.175s 00:31:06.328 sys 0m1.220s 00:31:06.328 01:10:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 ************************************ 00:31:06.328 END TEST fio_dif_1_default 00:31:06.328 ************************************ 00:31:06.328 01:10:48 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:06.328 01:10:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:06.328 01:10:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 ************************************ 00:31:06.328 START TEST 
fio_dif_1_multi_subsystems 00:31:06.328 ************************************ 00:31:06.328 01:10:48 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:06.328 01:10:48 -- target/dif.sh@92 -- # local files=1 00:31:06.328 01:10:48 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:06.328 01:10:48 -- target/dif.sh@28 -- # local sub 00:31:06.328 01:10:48 -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.328 01:10:48 -- target/dif.sh@31 -- # create_subsystem 0 00:31:06.328 01:10:48 -- target/dif.sh@18 -- # local sub_id=0 00:31:06.328 01:10:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 bdev_null0 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 [2024-07-23 01:10:48.732230] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.328 01:10:48 -- target/dif.sh@31 -- # create_subsystem 1 00:31:06.328 01:10:48 -- target/dif.sh@18 -- # local sub_id=1 00:31:06.328 01:10:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 bdev_null1 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.328 01:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.328 01:10:48 -- 
common/autotest_common.sh@10 -- # set +x 00:31:06.328 01:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.328 01:10:48 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:06.328 01:10:48 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:06.328 01:10:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:06.328 01:10:48 -- nvmf/common.sh@520 -- # config=() 00:31:06.328 01:10:48 -- nvmf/common.sh@520 -- # local subsystem config 00:31:06.328 01:10:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.328 01:10:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.328 01:10:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.328 { 00:31:06.328 "params": { 00:31:06.328 "name": "Nvme$subsystem", 00:31:06.328 "trtype": "$TEST_TRANSPORT", 00:31:06.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.328 "adrfam": "ipv4", 00:31:06.328 "trsvcid": "$NVMF_PORT", 00:31:06.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.328 "hdgst": ${hdgst:-false}, 00:31:06.328 "ddgst": ${ddgst:-false} 00:31:06.328 }, 00:31:06.328 "method": "bdev_nvme_attach_controller" 00:31:06.328 } 00:31:06.328 EOF 00:31:06.328 )") 00:31:06.328 01:10:48 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.328 01:10:48 -- target/dif.sh@82 -- # gen_fio_conf 00:31:06.328 01:10:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:06.328 01:10:48 -- target/dif.sh@54 -- # local file 00:31:06.328 01:10:48 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.328 01:10:48 -- target/dif.sh@56 -- # cat 00:31:06.328 01:10:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:06.328 01:10:48 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.328 01:10:48 -- common/autotest_common.sh@1320 -- # shift 00:31:06.328 01:10:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:06.328 01:10:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.328 01:10:48 -- nvmf/common.sh@542 -- # cat 00:31:06.328 01:10:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:06.328 01:10:48 -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.328 01:10:48 -- target/dif.sh@73 -- # cat 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:06.328 01:10:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.328 01:10:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.328 { 00:31:06.328 "params": { 00:31:06.328 "name": "Nvme$subsystem", 00:31:06.328 "trtype": "$TEST_TRANSPORT", 00:31:06.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.328 "adrfam": "ipv4", 00:31:06.328 "trsvcid": "$NVMF_PORT", 00:31:06.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.328 "hdgst": ${hdgst:-false}, 00:31:06.328 "ddgst": ${ddgst:-false} 00:31:06.328 }, 00:31:06.328 "method": "bdev_nvme_attach_controller" 00:31:06.328 } 00:31:06.328 EOF 00:31:06.328 )") 00:31:06.328 01:10:48 -- 
nvmf/common.sh@542 -- # cat 00:31:06.328 01:10:48 -- target/dif.sh@72 -- # (( file++ )) 00:31:06.328 01:10:48 -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.328 01:10:48 -- nvmf/common.sh@544 -- # jq . 00:31:06.328 01:10:48 -- nvmf/common.sh@545 -- # IFS=, 00:31:06.328 01:10:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:06.328 "params": { 00:31:06.328 "name": "Nvme0", 00:31:06.328 "trtype": "tcp", 00:31:06.328 "traddr": "10.0.0.2", 00:31:06.328 "adrfam": "ipv4", 00:31:06.328 "trsvcid": "4420", 00:31:06.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.328 "hdgst": false, 00:31:06.328 "ddgst": false 00:31:06.328 }, 00:31:06.328 "method": "bdev_nvme_attach_controller" 00:31:06.328 },{ 00:31:06.328 "params": { 00:31:06.328 "name": "Nvme1", 00:31:06.328 "trtype": "tcp", 00:31:06.328 "traddr": "10.0.0.2", 00:31:06.328 "adrfam": "ipv4", 00:31:06.328 "trsvcid": "4420", 00:31:06.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.328 "hdgst": false, 00:31:06.328 "ddgst": false 00:31:06.328 }, 00:31:06.328 "method": "bdev_nvme_attach_controller" 00:31:06.328 }' 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:06.328 01:10:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:06.328 01:10:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:06.328 01:10:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:06.328 01:10:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:06.328 01:10:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:06.328 01:10:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.328 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:06.328 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:06.328 fio-3.35 00:31:06.328 Starting 2 threads 00:31:06.329 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.329 [2024-07-23 01:10:49.576915] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:06.329 [2024-07-23 01:10:49.577026] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:16.297 00:31:16.297 filename0: (groupid=0, jobs=1): err= 0: pid=3541320: Tue Jul 23 01:10:59 2024 00:31:16.297 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10011msec) 00:31:16.297 slat (nsec): min=7012, max=97467, avg=9010.84, stdev=3541.63 00:31:16.297 clat (usec): min=795, max=42086, avg=21092.02, stdev=20121.71 00:31:16.297 lat (usec): min=803, max=42098, avg=21101.03, stdev=20121.41 00:31:16.297 clat percentiles (usec): 00:31:16.297 | 1.00th=[ 832], 5.00th=[ 857], 10.00th=[ 865], 20.00th=[ 881], 00:31:16.297 | 30.00th=[ 898], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:31:16.297 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:16.297 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:16.297 | 99.99th=[42206] 00:31:16.297 bw ( KiB/s): min= 672, max= 768, per=66.06%, avg=756.80, stdev=23.85, samples=20 00:31:16.297 iops : min= 168, max= 192, avg=189.20, stdev= 5.96, samples=20 00:31:16.297 lat (usec) : 1000=49.42% 00:31:16.297 lat (msec) : 2=0.37%, 50=50.21% 00:31:16.297 cpu : usr=94.53%, sys=5.16%, ctx=26, majf=0, minf=200 00:31:16.297 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.297 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.297 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:16.297 filename1: (groupid=0, jobs=1): err= 0: pid=3541321: Tue Jul 23 01:10:59 2024 00:31:16.297 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10006msec) 00:31:16.297 slat (nsec): min=7004, max=38432, avg=9769.13, stdev=4314.39 00:31:16.297 clat (usec): min=40891, max=42865, avg=41316.67, stdev=483.59 00:31:16.297 lat (usec): min=40898, max=42903, avg=41326.44, stdev=484.27 00:31:16.297 clat percentiles (usec): 00:31:16.297 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:16.297 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:16.297 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:16.297 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:16.297 | 99.99th=[42730] 00:31:16.297 bw ( KiB/s): min= 384, max= 416, per=33.64%, avg=385.60, stdev= 7.16, samples=20 00:31:16.297 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:31:16.297 lat (msec) : 50=100.00% 00:31:16.297 cpu : usr=95.04%, sys=4.65%, ctx=15, majf=0, minf=124 00:31:16.297 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.297 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.297 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:16.297 00:31:16.297 Run status group 0 (all jobs): 00:31:16.297 READ: bw=1144KiB/s (1172kB/s), 387KiB/s-758KiB/s (396kB/s-776kB/s), io=11.2MiB (11.7MB), run=10006-10011msec 00:31:16.297 01:10:59 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:16.297 01:10:59 -- target/dif.sh@43 -- # local sub 00:31:16.297 01:10:59 -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.297 01:10:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:16.297 
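The destroy_subsystems 0 helper being traced here simply unwinds the earlier setup: delete the NVMe-oF subsystem first, then the null bdev backing it. A standalone sketch using SPDK's stock rpc.py client is below; the client path and the use of the default RPC socket are assumptions, while the method names and arguments mirror the traced rpc_cmd calls.

# Sketch: teardown of subsystems 0 and 1 via rpc.py (path/socket assumed).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 0 1; do
    # Remove the subsystem first, then free its backing null bdev.
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    "$RPC" bdev_null_delete "bdev_null$i"
done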
01:10:59 -- target/dif.sh@36 -- # local sub_id=0 00:31:16.297 01:10:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:16.297 01:10:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:10:59 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.297 01:11:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:16.297 01:11:00 -- target/dif.sh@36 -- # local sub_id=1 00:31:16.297 01:11:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 00:31:16.297 real 0m11.328s 00:31:16.297 user 0m20.200s 00:31:16.297 sys 0m1.280s 00:31:16.297 01:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 ************************************ 00:31:16.297 END TEST fio_dif_1_multi_subsystems 00:31:16.297 ************************************ 00:31:16.297 01:11:00 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:16.297 01:11:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.297 01:11:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 ************************************ 00:31:16.297 START TEST fio_dif_rand_params 00:31:16.297 ************************************ 00:31:16.297 01:11:00 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:16.297 01:11:00 -- target/dif.sh@100 -- # local NULL_DIF 00:31:16.297 01:11:00 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:16.297 01:11:00 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:16.297 01:11:00 -- target/dif.sh@103 -- # bs=128k 00:31:16.297 01:11:00 -- target/dif.sh@103 -- # numjobs=3 00:31:16.297 01:11:00 -- target/dif.sh@103 -- # iodepth=3 00:31:16.297 01:11:00 -- target/dif.sh@103 -- # runtime=5 00:31:16.297 01:11:00 -- target/dif.sh@105 -- # create_subsystems 0 00:31:16.297 01:11:00 -- target/dif.sh@28 -- # local sub 00:31:16.297 01:11:00 -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.297 01:11:00 -- target/dif.sh@31 -- # create_subsystem 0 00:31:16.297 01:11:00 -- target/dif.sh@18 -- # local sub_id=0 00:31:16.297 01:11:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 bdev_null0 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:16.297 01:11:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.297 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:31:16.297 [2024-07-23 01:11:00.090263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.297 01:11:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.297 01:11:00 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:16.297 01:11:00 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:16.297 01:11:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:16.297 01:11:00 -- nvmf/common.sh@520 -- # config=() 00:31:16.297 01:11:00 -- nvmf/common.sh@520 -- # local subsystem config 00:31:16.297 01:11:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:16.297 01:11:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:16.297 { 00:31:16.297 "params": { 00:31:16.298 "name": "Nvme$subsystem", 00:31:16.298 "trtype": "$TEST_TRANSPORT", 00:31:16.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.298 "adrfam": "ipv4", 00:31:16.298 "trsvcid": "$NVMF_PORT", 00:31:16.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.298 "hdgst": ${hdgst:-false}, 00:31:16.298 "ddgst": ${ddgst:-false} 00:31:16.298 }, 00:31:16.298 "method": "bdev_nvme_attach_controller" 00:31:16.298 } 00:31:16.298 EOF 00:31:16.298 )") 00:31:16.298 01:11:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.298 01:11:00 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.298 01:11:00 -- target/dif.sh@82 -- # gen_fio_conf 00:31:16.298 01:11:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:16.298 01:11:00 -- target/dif.sh@54 -- # local file 00:31:16.298 01:11:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.298 01:11:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:16.298 01:11:00 -- target/dif.sh@56 -- # cat 00:31:16.298 01:11:00 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.298 01:11:00 -- common/autotest_common.sh@1320 -- # shift 00:31:16.298 01:11:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:16.298 01:11:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.298 01:11:00 -- nvmf/common.sh@542 -- # cat 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.298 01:11:00 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:16.298 01:11:00 -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:16.298 01:11:00 -- nvmf/common.sh@544 -- # jq . 00:31:16.298 01:11:00 -- nvmf/common.sh@545 -- # IFS=, 00:31:16.298 01:11:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:16.298 "params": { 00:31:16.298 "name": "Nvme0", 00:31:16.298 "trtype": "tcp", 00:31:16.298 "traddr": "10.0.0.2", 00:31:16.298 "adrfam": "ipv4", 00:31:16.298 "trsvcid": "4420", 00:31:16.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.298 "hdgst": false, 00:31:16.298 "ddgst": false 00:31:16.298 }, 00:31:16.298 "method": "bdev_nvme_attach_controller" 00:31:16.298 }' 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:16.298 01:11:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:16.298 01:11:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:16.298 01:11:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:16.298 01:11:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:16.298 01:11:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.298 01:11:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.298 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:16.298 ... 00:31:16.298 fio-3.35 00:31:16.298 Starting 3 threads 00:31:16.298 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.864 [2024-07-23 01:11:00.840337] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
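The grep libasan / grep libclang_rt.asan probes traced above decide whether a sanitizer runtime has to be preloaded in front of the spdk_bdev plugin: fio itself is not ASAN-instrumented, so if the plugin links against ASAN the runtime must come first in LD_PRELOAD. A compact sketch of that probe, assuming the same plugin path; in this run both probes come back empty, so LD_PRELOAD ends up holding only the plugin.

# Sketch of the sanitizer probe performed by the common test helpers.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
sanitizers=(libasan libclang_rt.asan)
asan_libs=
for sanitizer in "${sanitizers[@]}"; do
    # Column 3 of ldd output is the resolved library path; empty means the
    # plugin was not linked against this sanitizer runtime.
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$lib" ]] && asan_libs+="$lib "
done
# Preload any sanitizer runtimes first, then the plugin itself.
export LD_PRELOAD="$asan_libs$plugin"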
00:31:16.864 [2024-07-23 01:11:00.840413] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:22.122 00:31:22.122 filename0: (groupid=0, jobs=1): err= 0: pid=3542755: Tue Jul 23 01:11:05 2024 00:31:22.122 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(125MiB/5004msec) 00:31:22.122 slat (nsec): min=4843, max=26705, avg=12783.12, stdev=1843.18 00:31:22.122 clat (usec): min=4903, max=56422, avg=15022.34, stdev=14052.39 00:31:22.122 lat (usec): min=4915, max=56435, avg=15035.12, stdev=14052.47 00:31:22.122 clat percentiles (usec): 00:31:22.122 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 8029], 00:31:22.122 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[11076], 00:31:22.122 | 70.00th=[11994], 80.00th=[13042], 90.00th=[49021], 95.00th=[50594], 00:31:22.122 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:31:22.122 | 99.99th=[56361] 00:31:22.122 bw ( KiB/s): min=19200, max=33024, per=30.73%, avg=25477.20, stdev=3971.03, samples=10 00:31:22.122 iops : min= 150, max= 258, avg=199.00, stdev=31.02, samples=10 00:31:22.122 lat (msec) : 10=51.20%, 20=35.57%, 50=5.71%, 100=7.52% 00:31:22.122 cpu : usr=93.80%, sys=5.78%, ctx=8, majf=0, minf=85 00:31:22.122 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.122 issued rwts: total=998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.122 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:22.122 filename0: (groupid=0, jobs=1): err= 0: pid=3542756: Tue Jul 23 01:11:05 2024 00:31:22.122 read: IOPS=189, BW=23.6MiB/s (24.8MB/s)(119MiB/5013msec) 00:31:22.122 slat (nsec): min=4720, max=33705, avg=13902.16, stdev=3978.65 00:31:22.122 clat (usec): min=5470, max=91754, avg=15840.99, stdev=15001.70 00:31:22.122 lat (usec): min=5483, max=91779, avg=15854.89, stdev=15001.98 00:31:22.122 clat percentiles (usec): 00:31:22.122 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 8029], 00:31:22.122 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[11207], 00:31:22.122 | 70.00th=[11994], 80.00th=[13173], 90.00th=[50070], 95.00th=[52167], 00:31:22.122 | 99.00th=[53740], 99.50th=[54789], 99.90th=[91751], 99.95th=[91751], 00:31:22.122 | 99.99th=[91751] 00:31:22.122 bw ( KiB/s): min= 9984, max=30976, per=29.19%, avg=24197.00, stdev=6132.31, samples=10 00:31:22.122 iops : min= 78, max= 242, avg=189.00, stdev=47.90, samples=10 00:31:22.122 lat (msec) : 10=47.57%, 20=37.66%, 50=4.75%, 100=10.02% 00:31:22.122 cpu : usr=89.35%, sys=7.46%, ctx=708, majf=0, minf=107 00:31:22.122 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.122 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.122 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:22.122 filename0: (groupid=0, jobs=1): err= 0: pid=3542757: Tue Jul 23 01:11:05 2024 00:31:22.122 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(163MiB/5005msec) 00:31:22.122 slat (nsec): min=4794, max=27714, avg=12997.06, stdev=1776.36 00:31:22.122 clat (usec): min=4792, max=91248, avg=11520.85, stdev=10578.13 00:31:22.122 lat (usec): min=4805, max=91261, avg=11533.85, stdev=10578.04 00:31:22.122 clat 
percentiles (usec): 00:31:22.122 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6521], 00:31:22.122 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9372], 00:31:22.122 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13173], 95.00th=[49021], 00:31:22.122 | 99.00th=[51643], 99.50th=[52691], 99.90th=[55313], 99.95th=[91751], 00:31:22.123 | 99.99th=[91751] 00:31:22.123 bw ( KiB/s): min=25344, max=45312, per=40.08%, avg=33228.80, stdev=6655.78, samples=10 00:31:22.123 iops : min= 198, max= 354, avg=259.60, stdev=52.00, samples=10 00:31:22.123 lat (msec) : 10=64.80%, 20=28.82%, 50=3.00%, 100=3.38% 00:31:22.123 cpu : usr=91.61%, sys=7.45%, ctx=15, majf=0, minf=77 00:31:22.123 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.123 issued rwts: total=1301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:22.123 00:31:22.123 Run status group 0 (all jobs): 00:31:22.123 READ: bw=81.0MiB/s (84.9MB/s), 23.6MiB/s-32.5MiB/s (24.8MB/s-34.1MB/s), io=406MiB (426MB), run=5004-5013msec 00:31:22.123 01:11:06 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:22.123 01:11:06 -- target/dif.sh@43 -- # local sub 00:31:22.123 01:11:06 -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.123 01:11:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:22.123 01:11:06 -- target/dif.sh@36 -- # local sub_id=0 00:31:22.123 01:11:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:22.123 01:11:06 -- target/dif.sh@109 -- # bs=4k 00:31:22.123 01:11:06 -- target/dif.sh@109 -- # numjobs=8 00:31:22.123 01:11:06 -- target/dif.sh@109 -- # iodepth=16 00:31:22.123 01:11:06 -- target/dif.sh@109 -- # runtime= 00:31:22.123 01:11:06 -- target/dif.sh@109 -- # files=2 00:31:22.123 01:11:06 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:22.123 01:11:06 -- target/dif.sh@28 -- # local sub 00:31:22.123 01:11:06 -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.123 01:11:06 -- target/dif.sh@31 -- # create_subsystem 0 00:31:22.123 01:11:06 -- target/dif.sh@18 -- # local sub_id=0 00:31:22.123 01:11:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 bdev_null0 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 [2024-07-23 01:11:06.187859] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.123 01:11:06 -- target/dif.sh@31 -- # create_subsystem 1 00:31:22.123 01:11:06 -- target/dif.sh@18 -- # local sub_id=1 00:31:22.123 01:11:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 bdev_null1 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.123 01:11:06 -- target/dif.sh@31 -- # create_subsystem 2 00:31:22.123 01:11:06 -- target/dif.sh@18 -- # local sub_id=2 00:31:22.123 01:11:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 bdev_null2 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 
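Each create_subsystem call traced through here performs the same four RPCs: create a DIF-capable null bdev, create the subsystem, attach the bdev as a namespace, and expose a TCP listener. A standalone sketch for this phase (NULL_DIF=2) using SPDK's stock rpc.py client follows; the client path and default RPC socket are assumptions, while sizes, NQNs and options mirror the traced rpc_cmd calls.

# Sketch: per-subsystem setup for indices 0..2, as issued by create_subsystems.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 0 1 2; do
    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2.
    "$RPC" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done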
00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:22.123 01:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.123 01:11:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.123 01:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.123 01:11:06 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:22.123 01:11:06 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:22.123 01:11:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:22.123 01:11:06 -- nvmf/common.sh@520 -- # config=() 00:31:22.123 01:11:06 -- nvmf/common.sh@520 -- # local subsystem config 00:31:22.123 01:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:22.123 01:11:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.123 01:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:22.123 { 00:31:22.123 "params": { 00:31:22.123 "name": "Nvme$subsystem", 00:31:22.123 "trtype": "$TEST_TRANSPORT", 00:31:22.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.123 "adrfam": "ipv4", 00:31:22.123 "trsvcid": "$NVMF_PORT", 00:31:22.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.123 "hdgst": ${hdgst:-false}, 00:31:22.123 "ddgst": ${ddgst:-false} 00:31:22.123 }, 00:31:22.123 "method": "bdev_nvme_attach_controller" 00:31:22.123 } 00:31:22.123 EOF 00:31:22.123 )") 00:31:22.123 01:11:06 -- target/dif.sh@82 -- # gen_fio_conf 00:31:22.123 01:11:06 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.123 01:11:06 -- target/dif.sh@54 -- # local file 00:31:22.123 01:11:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:22.123 01:11:06 -- target/dif.sh@56 -- # cat 00:31:22.123 01:11:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.123 01:11:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:22.123 01:11:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.123 01:11:06 -- common/autotest_common.sh@1320 -- # shift 00:31:22.123 01:11:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:22.123 01:11:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.123 01:11:06 -- nvmf/common.sh@542 -- # cat 00:31:22.123 01:11:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:22.123 01:11:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.123 01:11:06 -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.123 01:11:06 -- target/dif.sh@73 -- # cat 00:31:22.123 01:11:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:22.123 01:11:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:22.123 01:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:22.123 01:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:22.123 { 00:31:22.123 "params": { 00:31:22.123 "name": "Nvme$subsystem", 00:31:22.123 "trtype": "$TEST_TRANSPORT", 00:31:22.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.123 "adrfam": "ipv4", 
00:31:22.123 "trsvcid": "$NVMF_PORT", 00:31:22.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.123 "hdgst": ${hdgst:-false}, 00:31:22.123 "ddgst": ${ddgst:-false} 00:31:22.123 }, 00:31:22.123 "method": "bdev_nvme_attach_controller" 00:31:22.123 } 00:31:22.123 EOF 00:31:22.123 )") 00:31:22.123 01:11:06 -- target/dif.sh@72 -- # (( file++ )) 00:31:22.123 01:11:06 -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.123 01:11:06 -- nvmf/common.sh@542 -- # cat 00:31:22.123 01:11:06 -- target/dif.sh@73 -- # cat 00:31:22.123 01:11:06 -- target/dif.sh@72 -- # (( file++ )) 00:31:22.123 01:11:06 -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.123 01:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:22.123 01:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:22.123 { 00:31:22.123 "params": { 00:31:22.123 "name": "Nvme$subsystem", 00:31:22.124 "trtype": "$TEST_TRANSPORT", 00:31:22.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.124 "adrfam": "ipv4", 00:31:22.124 "trsvcid": "$NVMF_PORT", 00:31:22.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.124 "hdgst": ${hdgst:-false}, 00:31:22.124 "ddgst": ${ddgst:-false} 00:31:22.124 }, 00:31:22.124 "method": "bdev_nvme_attach_controller" 00:31:22.124 } 00:31:22.124 EOF 00:31:22.124 )") 00:31:22.124 01:11:06 -- nvmf/common.sh@542 -- # cat 00:31:22.124 01:11:06 -- nvmf/common.sh@544 -- # jq . 00:31:22.124 01:11:06 -- nvmf/common.sh@545 -- # IFS=, 00:31:22.124 01:11:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:22.124 "params": { 00:31:22.124 "name": "Nvme0", 00:31:22.124 "trtype": "tcp", 00:31:22.124 "traddr": "10.0.0.2", 00:31:22.124 "adrfam": "ipv4", 00:31:22.124 "trsvcid": "4420", 00:31:22.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.124 "hdgst": false, 00:31:22.124 "ddgst": false 00:31:22.124 }, 00:31:22.124 "method": "bdev_nvme_attach_controller" 00:31:22.124 },{ 00:31:22.124 "params": { 00:31:22.124 "name": "Nvme1", 00:31:22.124 "trtype": "tcp", 00:31:22.124 "traddr": "10.0.0.2", 00:31:22.124 "adrfam": "ipv4", 00:31:22.124 "trsvcid": "4420", 00:31:22.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.124 "hdgst": false, 00:31:22.124 "ddgst": false 00:31:22.124 }, 00:31:22.124 "method": "bdev_nvme_attach_controller" 00:31:22.124 },{ 00:31:22.124 "params": { 00:31:22.124 "name": "Nvme2", 00:31:22.124 "trtype": "tcp", 00:31:22.124 "traddr": "10.0.0.2", 00:31:22.124 "adrfam": "ipv4", 00:31:22.124 "trsvcid": "4420", 00:31:22.124 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:22.124 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:22.124 "hdgst": false, 00:31:22.124 "ddgst": false 00:31:22.124 }, 00:31:22.124 "method": "bdev_nvme_attach_controller" 00:31:22.124 }' 00:31:22.124 01:11:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:22.124 01:11:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:22.124 01:11:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.124 01:11:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.124 01:11:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:22.124 01:11:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:22.124 01:11:06 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:31:22.124 01:11:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:22.124 01:11:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:22.124 01:11:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.382 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:22.382 ... 00:31:22.382 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:22.382 ... 00:31:22.382 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:22.382 ... 00:31:22.382 fio-3.35 00:31:22.382 Starting 24 threads 00:31:22.382 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.316 [2024-07-23 01:11:07.202593] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:23.316 [2024-07-23 01:11:07.202695] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:33.288 00:31:33.288 filename0: (groupid=0, jobs=1): err= 0: pid=3543524: Tue Jul 23 01:11:17 2024 00:31:33.288 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.7MiB/10006msec) 00:31:33.288 slat (usec): min=7, max=120, avg=39.89, stdev=18.40 00:31:33.288 clat (usec): min=4702, max=50071, avg=31363.82, stdev=2424.83 00:31:33.288 lat (usec): min=4726, max=50116, avg=31403.71, stdev=2426.80 00:31:33.288 clat percentiles (usec): 00:31:33.288 | 1.00th=[24773], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.288 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.288 | 70.00th=[31589], 80.00th=[31589], 90.00th=[32113], 95.00th=[32900], 00:31:33.288 | 99.00th=[35390], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:31:33.288 | 99.99th=[50070] 00:31:33.288 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2011.79, stdev=69.30, samples=19 00:31:33.288 iops : min= 448, max= 512, avg=502.95, stdev=17.33, samples=19 00:31:33.288 lat (msec) : 10=0.48%, 20=0.16%, 50=99.33%, 100=0.04% 00:31:33.288 cpu : usr=96.48%, sys=1.93%, ctx=115, majf=0, minf=59 00:31:33.288 IO depths : 1=3.0%, 2=8.2%, 4=21.0%, 8=57.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:31:33.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 complete : 0=0.0%, 4=93.4%, 8=1.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.288 filename0: (groupid=0, jobs=1): err= 0: pid=3543525: Tue Jul 23 01:11:17 2024 00:31:33.288 read: IOPS=509, BW=2038KiB/s (2087kB/s)(19.9MiB/10018msec) 00:31:33.288 slat (usec): min=5, max=800, avg=42.23, stdev=24.04 00:31:33.288 clat (usec): min=3758, max=34691, avg=31053.02, stdev=3060.74 00:31:33.288 lat (usec): min=3769, max=34728, avg=31095.25, stdev=3063.99 00:31:33.288 clat percentiles (usec): 00:31:33.288 | 1.00th=[ 7111], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.288 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.288 | 70.00th=[31589], 80.00th=[31589], 90.00th=[32113], 95.00th=[32637], 00:31:33.288 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34866], 00:31:33.288 | 99.99th=[34866] 00:31:33.288 bw ( KiB/s): min= 1920, max= 2432, per=4.22%, avg=2035.20, 
stdev=109.09, samples=20 00:31:33.288 iops : min= 480, max= 608, avg=508.80, stdev=27.27, samples=20 00:31:33.288 lat (msec) : 4=0.08%, 10=1.16%, 20=0.33%, 50=98.43% 00:31:33.288 cpu : usr=89.51%, sys=4.86%, ctx=278, majf=0, minf=39 00:31:33.288 IO depths : 1=6.1%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:33.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.288 filename0: (groupid=0, jobs=1): err= 0: pid=3543526: Tue Jul 23 01:11:17 2024 00:31:33.288 read: IOPS=503, BW=2015KiB/s (2064kB/s)(19.7MiB/10023msec) 00:31:33.288 slat (usec): min=8, max=219, avg=42.08, stdev=26.08 00:31:33.288 clat (usec): min=17090, max=47442, avg=31396.05, stdev=2126.11 00:31:33.288 lat (usec): min=17098, max=47479, avg=31438.13, stdev=2125.03 00:31:33.288 clat percentiles (usec): 00:31:33.288 | 1.00th=[21103], 5.00th=[30278], 10.00th=[30540], 20.00th=[30802], 00:31:33.288 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.288 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32637], 95.00th=[33817], 00:31:33.288 | 99.00th=[41157], 99.50th=[42206], 99.90th=[46400], 99.95th=[46924], 00:31:33.288 | 99.99th=[47449] 00:31:33.288 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2013.60, stdev=80.97, samples=20 00:31:33.288 iops : min= 448, max= 544, avg=503.40, stdev=20.24, samples=20 00:31:33.288 lat (msec) : 20=0.79%, 50=99.21% 00:31:33.288 cpu : usr=93.07%, sys=3.08%, ctx=120, majf=0, minf=36 00:31:33.288 IO depths : 1=4.3%, 2=10.1%, 4=23.6%, 8=53.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:31:33.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.288 filename0: (groupid=0, jobs=1): err= 0: pid=3543527: Tue Jul 23 01:11:17 2024 00:31:33.288 read: IOPS=502, BW=2012KiB/s (2060kB/s)(19.7MiB/10022msec) 00:31:33.288 slat (usec): min=8, max=691, avg=40.61, stdev=27.58 00:31:33.288 clat (usec): min=27683, max=41200, avg=31460.06, stdev=904.91 00:31:33.288 lat (usec): min=27700, max=41248, avg=31500.68, stdev=905.52 00:31:33.288 clat percentiles (usec): 00:31:33.288 | 1.00th=[30016], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.288 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.288 | 70.00th=[31589], 80.00th=[31589], 90.00th=[32113], 95.00th=[32900], 00:31:33.288 | 99.00th=[34341], 99.50th=[34866], 99.90th=[41157], 99.95th=[41157], 00:31:33.288 | 99.99th=[41157] 00:31:33.288 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=2009.60, stdev=70.49, samples=20 00:31:33.288 iops : min= 448, max= 512, avg=502.40, stdev=17.62, samples=20 00:31:33.288 lat (msec) : 50=100.00% 00:31:33.288 cpu : usr=93.19%, sys=3.23%, ctx=117, majf=0, minf=30 00:31:33.288 IO depths : 1=3.3%, 2=9.5%, 4=24.9%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:33.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.288 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:31:33.288 filename0: (groupid=0, jobs=1): err= 0: pid=3543528: Tue Jul 23 01:11:17 2024 00:31:33.288 read: IOPS=501, BW=2007KiB/s (2056kB/s)(19.6MiB/10011msec) 00:31:33.288 slat (usec): min=8, max=129, avg=44.55, stdev=15.97 00:31:33.288 clat (usec): min=21989, max=58157, avg=31505.91, stdev=1693.64 00:31:33.288 lat (usec): min=22000, max=58197, avg=31550.47, stdev=1693.34 00:31:33.288 clat percentiles (usec): 00:31:33.288 | 1.00th=[30278], 5.00th=[30802], 10.00th=[30802], 20.00th=[31065], 00:31:33.288 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.288 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32637], 00:31:33.288 | 99.00th=[34341], 99.50th=[34866], 99.90th=[57934], 99.95th=[57934], 00:31:33.288 | 99.99th=[57934] 00:31:33.288 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=2000.84, stdev=75.14, samples=19 00:31:33.288 iops : min= 448, max= 512, avg=500.21, stdev=18.78, samples=19 00:31:33.288 lat (msec) : 50=99.68%, 100=0.32% 00:31:33.288 cpu : usr=98.32%, sys=1.12%, ctx=57, majf=0, minf=35 00:31:33.288 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:33.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.288 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.288 filename0: (groupid=0, jobs=1): err= 0: pid=3543529: Tue Jul 23 01:11:17 2024 00:31:33.288 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10009msec) 00:31:33.288 slat (usec): min=6, max=968, avg=48.05, stdev=28.94 00:31:33.288 clat (usec): min=8789, max=56962, avg=31418.38, stdev=2300.88 00:31:33.288 lat (usec): min=8805, max=57047, avg=31466.44, stdev=2303.18 00:31:33.288 clat percentiles (usec): 00:31:33.288 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[30802], 00:31:33.288 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.288 | 70.00th=[31327], 80.00th=[31589], 90.00th=[32113], 95.00th=[32637], 00:31:33.288 | 99.00th=[34341], 99.50th=[51643], 99.90th=[55837], 99.95th=[56361], 00:31:33.288 | 99.99th=[56886] 00:31:33.288 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=2003.20, stdev=85.87, samples=20 00:31:33.289 iops : min= 448, max= 512, avg=500.80, stdev=21.47, samples=20 00:31:33.289 lat (msec) : 10=0.32%, 50=99.04%, 100=0.64% 00:31:33.289 cpu : usr=97.62%, sys=1.32%, ctx=55, majf=0, minf=27 00:31:33.289 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename0: (groupid=0, jobs=1): err= 0: pid=3543530: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10011msec) 00:31:33.289 slat (usec): min=8, max=902, avg=45.06, stdev=26.75 00:31:33.289 clat (usec): min=16440, max=57461, avg=31550.47, stdev=2176.35 00:31:33.289 lat (usec): min=16451, max=57565, avg=31595.53, stdev=2176.76 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[27657], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.289 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.289 | 
70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[33162], 00:31:33.289 | 99.00th=[36963], 99.50th=[52167], 99.90th=[55837], 99.95th=[56886], 00:31:33.289 | 99.99th=[57410] 00:31:33.289 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=2002.53, stdev=72.59, samples=19 00:31:33.289 iops : min= 448, max= 512, avg=500.63, stdev=18.15, samples=19 00:31:33.289 lat (msec) : 20=0.20%, 50=99.20%, 100=0.60% 00:31:33.289 cpu : usr=92.78%, sys=3.50%, ctx=180, majf=0, minf=30 00:31:33.289 IO depths : 1=2.5%, 2=8.6%, 4=24.7%, 8=54.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename0: (groupid=0, jobs=1): err= 0: pid=3543531: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=502, BW=2011KiB/s (2059kB/s)(19.7MiB/10023msec) 00:31:33.289 slat (usec): min=8, max=155, avg=45.46, stdev=20.04 00:31:33.289 clat (usec): min=16258, max=53731, avg=31455.83, stdev=1493.74 00:31:33.289 lat (usec): min=16267, max=53757, avg=31501.29, stdev=1492.74 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[27919], 5.00th=[30540], 10.00th=[30802], 20.00th=[30802], 00:31:33.289 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.289 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[33162], 00:31:33.289 | 99.00th=[36439], 99.50th=[38536], 99.90th=[50070], 99.95th=[53740], 00:31:33.289 | 99.99th=[53740] 00:31:33.289 bw ( KiB/s): min= 1792, max= 2064, per=4.16%, avg=2008.80, stdev=72.95, samples=20 00:31:33.289 iops : min= 448, max= 516, avg=502.20, stdev=18.24, samples=20 00:31:33.289 lat (msec) : 20=0.16%, 50=99.72%, 100=0.12% 00:31:33.289 cpu : usr=98.72%, sys=0.87%, ctx=20, majf=0, minf=30 00:31:33.289 IO depths : 1=4.4%, 2=10.6%, 4=24.9%, 8=52.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename1: (groupid=0, jobs=1): err= 0: pid=3543532: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=504, BW=2020KiB/s (2068kB/s)(19.8MiB/10021msec) 00:31:33.289 slat (usec): min=7, max=153, avg=28.60, stdev=17.60 00:31:33.289 clat (usec): min=17220, max=40862, avg=31466.22, stdev=1526.64 00:31:33.289 lat (usec): min=17229, max=40890, avg=31494.82, stdev=1524.84 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[27657], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:31:33.289 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:31:33.289 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32637], 95.00th=[33817], 00:31:33.289 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38536], 99.95th=[40633], 00:31:33.289 | 99.99th=[40633] 00:31:33.289 bw ( KiB/s): min= 1792, max= 2104, per=4.18%, avg=2017.60, stdev=74.98, samples=20 00:31:33.289 iops : min= 448, max= 526, avg=504.40, stdev=18.75, samples=20 00:31:33.289 lat (msec) : 20=0.53%, 50=99.47% 00:31:33.289 cpu : usr=98.42%, sys=1.19%, ctx=10, majf=0, minf=37 00:31:33.289 IO depths : 1=4.5%, 2=9.5%, 4=20.3%, 8=57.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename1: (groupid=0, jobs=1): err= 0: pid=3543533: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=504, BW=2016KiB/s (2065kB/s)(19.7MiB/10023msec) 00:31:33.289 slat (usec): min=7, max=163, avg=43.66, stdev=19.91 00:31:33.289 clat (usec): min=15713, max=43644, avg=31374.81, stdev=1524.36 00:31:33.289 lat (usec): min=15741, max=43666, avg=31418.46, stdev=1522.95 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[27395], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.289 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.289 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.289 | 99.00th=[34866], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:31:33.289 | 99.99th=[43779] 00:31:33.289 bw ( KiB/s): min= 1792, max= 2144, per=4.17%, avg=2014.40, stdev=78.88, samples=20 00:31:33.289 iops : min= 448, max= 536, avg=503.60, stdev=19.72, samples=20 00:31:33.289 lat (msec) : 20=0.51%, 50=99.49% 00:31:33.289 cpu : usr=96.77%, sys=1.86%, ctx=61, majf=0, minf=29 00:31:33.289 IO depths : 1=4.7%, 2=10.7%, 4=24.4%, 8=52.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename1: (groupid=0, jobs=1): err= 0: pid=3543534: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=503, BW=2014KiB/s (2063kB/s)(19.7MiB/10008msec) 00:31:33.289 slat (usec): min=8, max=122, avg=42.66, stdev=16.88 00:31:33.289 clat (usec): min=7204, max=50993, avg=31408.65, stdev=1984.64 00:31:33.289 lat (usec): min=7217, max=51024, avg=31451.32, stdev=1985.17 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[27395], 5.00th=[30802], 10.00th=[30802], 20.00th=[31065], 00:31:33.289 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.289 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.289 | 99.00th=[34866], 99.50th=[40109], 99.90th=[51119], 99.95th=[51119], 00:31:33.289 | 99.99th=[51119] 00:31:33.289 bw ( KiB/s): min= 1795, max= 2080, per=4.16%, avg=2009.75, stdev=73.20, samples=20 00:31:33.289 iops : min= 448, max= 520, avg=502.40, stdev=18.42, samples=20 00:31:33.289 lat (msec) : 10=0.32%, 50=99.37%, 100=0.32% 00:31:33.289 cpu : usr=98.63%, sys=0.97%, ctx=19, majf=0, minf=37 00:31:33.289 IO depths : 1=3.3%, 2=9.5%, 4=24.8%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename1: (groupid=0, jobs=1): err= 0: pid=3543535: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=502, BW=2011KiB/s (2060kB/s)(19.7MiB/10007msec) 00:31:33.289 slat (usec): min=8, max=199, avg=40.34, stdev=26.60 00:31:33.289 clat (usec): min=8117, max=53158, avg=31497.47, stdev=2583.58 00:31:33.289 lat (usec): min=8126, 
max=53186, avg=31537.81, stdev=2585.26 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[21890], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.289 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.289 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[33162], 00:31:33.289 | 99.00th=[43254], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:31:33.289 | 99.99th=[53216] 00:31:33.289 bw ( KiB/s): min= 1776, max= 2048, per=4.15%, avg=2004.21, stdev=71.57, samples=19 00:31:33.289 iops : min= 444, max= 512, avg=501.05, stdev=17.89, samples=19 00:31:33.289 lat (msec) : 10=0.12%, 20=0.64%, 50=99.09%, 100=0.16% 00:31:33.289 cpu : usr=96.62%, sys=1.91%, ctx=49, majf=0, minf=33 00:31:33.289 IO depths : 1=1.8%, 2=6.1%, 4=17.5%, 8=62.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=92.8%, 8=3.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename1: (groupid=0, jobs=1): err= 0: pid=3543536: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=503, BW=2014KiB/s (2063kB/s)(19.7MiB/10008msec) 00:31:33.289 slat (usec): min=8, max=139, avg=42.11, stdev=15.07 00:31:33.289 clat (usec): min=9677, max=51642, avg=31376.21, stdev=1751.58 00:31:33.289 lat (usec): min=9692, max=51694, avg=31418.32, stdev=1753.38 00:31:33.289 clat percentiles (usec): 00:31:33.289 | 1.00th=[30016], 5.00th=[30802], 10.00th=[30802], 20.00th=[31065], 00:31:33.289 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.289 | 70.00th=[31589], 80.00th=[31589], 90.00th=[32113], 95.00th=[32637], 00:31:33.289 | 99.00th=[34341], 99.50th=[34341], 99.90th=[51643], 99.95th=[51643], 00:31:33.289 | 99.99th=[51643] 00:31:33.289 bw ( KiB/s): min= 1792, max= 2104, per=4.17%, avg=2012.40, stdev=75.69, samples=20 00:31:33.289 iops : min= 448, max= 526, avg=503.10, stdev=18.92, samples=20 00:31:33.289 lat (msec) : 10=0.14%, 20=0.18%, 50=99.37%, 100=0.32% 00:31:33.289 cpu : usr=93.82%, sys=3.14%, ctx=222, majf=0, minf=24 00:31:33.289 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.289 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.289 filename1: (groupid=0, jobs=1): err= 0: pid=3543537: Tue Jul 23 01:11:17 2024 00:31:33.289 read: IOPS=508, BW=2033KiB/s (2081kB/s)(19.9MiB/10013msec) 00:31:33.289 slat (usec): min=8, max=191, avg=32.97, stdev=21.79 00:31:33.290 clat (usec): min=4548, max=58857, avg=31206.41, stdev=3292.72 00:31:33.290 lat (usec): min=4573, max=58884, avg=31239.39, stdev=3294.46 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[10552], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:31:33.290 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:31:33.290 | 99.00th=[34866], 99.50th=[41157], 99.90th=[50070], 99.95th=[51643], 00:31:33.290 | 99.99th=[58983] 00:31:33.290 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=2028.80, stdev=72.42, samples=20 00:31:33.290 iops : min= 480, max= 544, 
avg=507.20, stdev=18.10, samples=20 00:31:33.290 lat (msec) : 10=0.98%, 20=0.53%, 50=98.41%, 100=0.08% 00:31:33.290 cpu : usr=95.05%, sys=2.70%, ctx=301, majf=0, minf=31 00:31:33.290 IO depths : 1=5.5%, 2=11.7%, 4=24.5%, 8=51.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename1: (groupid=0, jobs=1): err= 0: pid=3543538: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=501, BW=2007KiB/s (2056kB/s)(19.7MiB/10043msec) 00:31:33.290 slat (usec): min=7, max=185, avg=25.17, stdev=20.88 00:31:33.290 clat (usec): min=11052, max=50537, avg=31563.50, stdev=5186.56 00:31:33.290 lat (usec): min=11067, max=50546, avg=31588.67, stdev=5186.38 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[13173], 5.00th=[28967], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:31:33.290 | 70.00th=[31851], 80.00th=[32113], 90.00th=[33162], 95.00th=[34341], 00:31:33.290 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:31:33.290 | 99.99th=[50594] 00:31:33.290 bw ( KiB/s): min= 1792, max= 2112, per=4.18%, avg=2015.20, stdev=73.13, samples=20 00:31:33.290 iops : min= 448, max= 528, avg=503.80, stdev=18.28, samples=20 00:31:33.290 lat (msec) : 20=4.31%, 50=95.54%, 100=0.16% 00:31:33.290 cpu : usr=98.50%, sys=1.08%, ctx=10, majf=0, minf=30 00:31:33.290 IO depths : 1=2.7%, 2=7.9%, 4=20.9%, 8=58.2%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename1: (groupid=0, jobs=1): err= 0: pid=3543539: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=502, BW=2009KiB/s (2058kB/s)(19.6MiB/10001msec) 00:31:33.290 slat (usec): min=9, max=204, avg=44.42, stdev=18.99 00:31:33.290 clat (usec): min=28294, max=52801, avg=31452.86, stdev=1372.72 00:31:33.290 lat (usec): min=28327, max=52822, avg=31497.28, stdev=1371.72 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[30016], 5.00th=[30802], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.290 | 70.00th=[31589], 80.00th=[31589], 90.00th=[32113], 95.00th=[32637], 00:31:33.290 | 99.00th=[34341], 99.50th=[34866], 99.90th=[52691], 99.95th=[52691], 00:31:33.290 | 99.99th=[52691] 00:31:33.290 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=2007.58, stdev=74.55, samples=19 00:31:33.290 iops : min= 448, max= 512, avg=501.89, stdev=18.64, samples=19 00:31:33.290 lat (msec) : 50=99.68%, 100=0.32% 00:31:33.290 cpu : usr=98.57%, sys=1.03%, ctx=17, majf=0, minf=38 00:31:33.290 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename2: (groupid=0, 
jobs=1): err= 0: pid=3543540: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10007msec) 00:31:33.290 slat (usec): min=8, max=223, avg=39.27, stdev=20.93 00:31:33.290 clat (usec): min=8974, max=58317, avg=31464.93, stdev=2135.50 00:31:33.290 lat (usec): min=8983, max=58331, avg=31504.20, stdev=2136.38 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[28705], 5.00th=[30802], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.290 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.290 | 99.00th=[34866], 99.50th=[47449], 99.90th=[50594], 99.95th=[58459], 00:31:33.290 | 99.99th=[58459] 00:31:33.290 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=2005.89, stdev=69.58, samples=19 00:31:33.290 iops : min= 448, max= 512, avg=501.47, stdev=17.40, samples=19 00:31:33.290 lat (msec) : 10=0.32%, 20=0.04%, 50=99.38%, 100=0.26% 00:31:33.290 cpu : usr=95.39%, sys=2.55%, ctx=518, majf=0, minf=36 00:31:33.290 IO depths : 1=2.3%, 2=8.1%, 4=23.1%, 8=55.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=93.9%, 8=0.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename2: (groupid=0, jobs=1): err= 0: pid=3543541: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=504, BW=2018KiB/s (2066kB/s)(19.8MiB/10023msec) 00:31:33.290 slat (usec): min=8, max=186, avg=35.67, stdev=26.15 00:31:33.290 clat (usec): min=6310, max=42705, avg=31400.07, stdev=1734.85 00:31:33.290 lat (usec): min=6320, max=42721, avg=31435.74, stdev=1734.69 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[28181], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.290 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[33162], 00:31:33.290 | 99.00th=[34866], 99.50th=[38536], 99.90th=[40633], 99.95th=[41157], 00:31:33.290 | 99.99th=[42730] 00:31:33.290 bw ( KiB/s): min= 1788, max= 2160, per=4.18%, avg=2015.80, stdev=79.84, samples=20 00:31:33.290 iops : min= 447, max= 540, avg=503.95, stdev=19.96, samples=20 00:31:33.290 lat (msec) : 10=0.28%, 20=0.08%, 50=99.64% 00:31:33.290 cpu : usr=98.26%, sys=1.32%, ctx=16, majf=0, minf=23 00:31:33.290 IO depths : 1=3.8%, 2=9.7%, 4=23.4%, 8=54.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename2: (groupid=0, jobs=1): err= 0: pid=3543542: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=502, BW=2008KiB/s (2057kB/s)(19.6MiB/10006msec) 00:31:33.290 slat (usec): min=8, max=176, avg=45.08, stdev=19.48 00:31:33.290 clat (usec): min=27528, max=57416, avg=31464.87, stdev=1629.49 00:31:33.290 lat (usec): min=27546, max=57444, avg=31509.95, stdev=1629.00 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.290 | 70.00th=[31589], 80.00th=[31589], 
90.00th=[32113], 95.00th=[32637], 00:31:33.290 | 99.00th=[34341], 99.50th=[35914], 99.90th=[57410], 99.95th=[57410], 00:31:33.290 | 99.99th=[57410] 00:31:33.290 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=2007.58, stdev=74.55, samples=19 00:31:33.290 iops : min= 448, max= 512, avg=501.89, stdev=18.64, samples=19 00:31:33.290 lat (msec) : 50=99.68%, 100=0.32% 00:31:33.290 cpu : usr=97.71%, sys=1.37%, ctx=44, majf=0, minf=35 00:31:33.290 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename2: (groupid=0, jobs=1): err= 0: pid=3543543: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=513, BW=2055KiB/s (2105kB/s)(20.1MiB/10015msec) 00:31:33.290 slat (usec): min=6, max=137, avg=19.53, stdev=17.86 00:31:33.290 clat (usec): min=2153, max=55325, avg=30981.38, stdev=3811.07 00:31:33.290 lat (usec): min=2168, max=55333, avg=31000.91, stdev=3810.80 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[ 5080], 5.00th=[30016], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:31:33.290 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.290 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38536], 99.95th=[45351], 00:31:33.290 | 99.99th=[55313] 00:31:33.290 bw ( KiB/s): min= 1920, max= 2480, per=4.25%, avg=2052.00, stdev=119.27, samples=20 00:31:33.290 iops : min= 480, max= 620, avg=513.00, stdev=29.82, samples=20 00:31:33.290 lat (msec) : 4=0.35%, 10=1.48%, 20=0.56%, 50=97.59%, 100=0.02% 00:31:33.290 cpu : usr=98.39%, sys=1.22%, ctx=10, majf=0, minf=28 00:31:33.290 IO depths : 1=1.9%, 2=7.7%, 4=23.6%, 8=56.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:33.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.290 issued rwts: total=5146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.290 filename2: (groupid=0, jobs=1): err= 0: pid=3543544: Tue Jul 23 01:11:17 2024 00:31:33.290 read: IOPS=502, BW=2011KiB/s (2060kB/s)(19.7MiB/10023msec) 00:31:33.290 slat (usec): min=8, max=141, avg=41.70, stdev=15.62 00:31:33.290 clat (usec): min=18933, max=53400, avg=31469.52, stdev=1341.49 00:31:33.290 lat (usec): min=18941, max=53447, avg=31511.22, stdev=1341.54 00:31:33.290 clat percentiles (usec): 00:31:33.290 | 1.00th=[29492], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.290 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.290 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.290 | 99.00th=[34341], 99.50th=[40109], 99.90th=[47449], 99.95th=[53216], 00:31:33.290 | 99.99th=[53216] 00:31:33.290 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=2009.60, stdev=73.12, samples=20 00:31:33.290 iops : min= 448, max= 512, avg=502.40, stdev=18.28, samples=20 00:31:33.290 lat (msec) : 20=0.04%, 50=99.88%, 100=0.08% 00:31:33.290 cpu : usr=98.59%, sys=1.00%, ctx=17, majf=0, minf=27 00:31:33.290 IO depths : 1=4.7%, 2=10.9%, 4=24.9%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:33.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.291 filename2: (groupid=0, jobs=1): err= 0: pid=3543545: Tue Jul 23 01:11:17 2024 00:31:33.291 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.8MiB/10023msec) 00:31:33.291 slat (usec): min=7, max=304, avg=46.13, stdev=27.43 00:31:33.291 clat (usec): min=4316, max=56083, avg=31272.88, stdev=1801.89 00:31:33.291 lat (usec): min=4326, max=56092, avg=31319.01, stdev=1802.80 00:31:33.291 clat percentiles (usec): 00:31:33.291 | 1.00th=[26870], 5.00th=[30540], 10.00th=[30540], 20.00th=[30802], 00:31:33.291 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:31:33.291 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.291 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42206], 99.95th=[49546], 00:31:33.291 | 99.99th=[55837] 00:31:33.291 bw ( KiB/s): min= 1792, max= 2200, per=4.18%, avg=2017.20, stdev=84.36, samples=20 00:31:33.291 iops : min= 448, max= 550, avg=504.30, stdev=21.09, samples=20 00:31:33.291 lat (msec) : 10=0.14%, 20=0.38%, 50=99.47%, 100=0.02% 00:31:33.291 cpu : usr=94.30%, sys=2.85%, ctx=126, majf=0, minf=30 00:31:33.291 IO depths : 1=5.4%, 2=11.4%, 4=24.2%, 8=51.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:33.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 issued rwts: total=5059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.291 filename2: (groupid=0, jobs=1): err= 0: pid=3543546: Tue Jul 23 01:11:17 2024 00:31:33.291 read: IOPS=504, BW=2017KiB/s (2066kB/s)(19.8MiB/10025msec) 00:31:33.291 slat (usec): min=8, max=114, avg=42.02, stdev=15.45 00:31:33.291 clat (usec): min=3800, max=56409, avg=31373.15, stdev=2191.34 00:31:33.291 lat (usec): min=3826, max=56435, avg=31415.17, stdev=2192.31 00:31:33.291 clat percentiles (usec): 00:31:33.291 | 1.00th=[26870], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.291 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.291 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32900], 00:31:33.291 | 99.00th=[34866], 99.50th=[41157], 99.90th=[54789], 99.95th=[54789], 00:31:33.291 | 99.99th=[56361] 00:31:33.291 bw ( KiB/s): min= 1792, max= 2176, per=4.18%, avg=2016.00, stdev=80.59, samples=20 00:31:33.291 iops : min= 448, max= 544, avg=504.00, stdev=20.15, samples=20 00:31:33.291 lat (msec) : 4=0.14%, 10=0.14%, 20=0.20%, 50=99.25%, 100=0.28% 00:31:33.291 cpu : usr=97.93%, sys=1.38%, ctx=188, majf=0, minf=26 00:31:33.291 IO depths : 1=5.5%, 2=11.5%, 4=24.3%, 8=51.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:33.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.291 filename2: (groupid=0, jobs=1): err= 0: pid=3543547: Tue Jul 23 01:11:17 2024 00:31:33.291 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10007msec) 00:31:33.291 slat (usec): min=8, max=1058, avg=40.11, stdev=31.42 00:31:33.291 clat (usec): min=5411, max=58848, avg=31478.22, stdev=2979.21 
00:31:33.291 lat (usec): min=5421, max=58858, avg=31518.33, stdev=2979.94 00:31:33.291 clat percentiles (usec): 00:31:33.291 | 1.00th=[21890], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:31:33.291 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:31:33.291 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[33817], 00:31:33.291 | 99.00th=[44303], 99.50th=[50594], 99.90th=[58983], 99.95th=[58983], 00:31:33.291 | 99.99th=[58983] 00:31:33.291 bw ( KiB/s): min= 1795, max= 2064, per=4.16%, avg=2008.95, stdev=71.36, samples=20 00:31:33.291 iops : min= 448, max= 516, avg=502.20, stdev=17.96, samples=20 00:31:33.291 lat (msec) : 10=0.58%, 20=0.16%, 50=98.69%, 100=0.58% 00:31:33.291 cpu : usr=94.92%, sys=2.63%, ctx=356, majf=0, minf=30 00:31:33.291 IO depths : 1=1.0%, 2=6.1%, 4=22.9%, 8=57.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:33.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 complete : 0=0.0%, 4=94.2%, 8=0.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.291 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.291 00:31:33.291 Run status group 0 (all jobs): 00:31:33.291 READ: bw=47.1MiB/s (49.4MB/s), 2003KiB/s-2055KiB/s (2051kB/s-2105kB/s), io=473MiB (496MB), run=10001-10043msec 00:31:33.550 01:11:17 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:33.550 01:11:17 -- target/dif.sh@43 -- # local sub 00:31:33.550 01:11:17 -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.550 01:11:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:33.550 01:11:17 -- target/dif.sh@36 -- # local sub_id=0 00:31:33.550 01:11:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.550 01:11:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:33.550 01:11:17 -- target/dif.sh@36 -- # local sub_id=1 00:31:33.550 01:11:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.550 01:11:17 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:33.550 01:11:17 -- target/dif.sh@36 -- # local sub_id=2 00:31:33.550 01:11:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 
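
At this point the script tears down the three null-bdev subsystems used for the randread group summarized above and, just below, rebuilds subsystems 0 and 1 with DIF type 1 metadata for the next fio_dif_rand_params case (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1). A minimal sketch of the per-subsystem pattern the trace is exercising, assuming rpc_cmd is the test framework's wrapper around SPDK's rpc.py and reusing the NQNs, bdev geometry, and listener address shown in the trace:

  # destroy_subsystem $sub_id: drop the NVMe-oF subsystem, then its backing null bdev
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_null_delete bdev_null0

  # create_subsystem $sub_id: 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1,
  # exported over NVMe/TCP on 10.0.0.2:4420
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
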
00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:33.550 01:11:17 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:33.550 01:11:17 -- target/dif.sh@115 -- # numjobs=2 00:31:33.550 01:11:17 -- target/dif.sh@115 -- # iodepth=8 00:31:33.550 01:11:17 -- target/dif.sh@115 -- # runtime=5 00:31:33.550 01:11:17 -- target/dif.sh@115 -- # files=1 00:31:33.550 01:11:17 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:33.550 01:11:17 -- target/dif.sh@28 -- # local sub 00:31:33.550 01:11:17 -- target/dif.sh@30 -- # for sub in "$@" 00:31:33.550 01:11:17 -- target/dif.sh@31 -- # create_subsystem 0 00:31:33.550 01:11:17 -- target/dif.sh@18 -- # local sub_id=0 00:31:33.550 01:11:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.550 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.550 bdev_null0 00:31:33.550 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.550 01:11:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:33.550 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:33.809 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:33.809 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 [2024-07-23 01:11:17.771626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@30 -- # for sub in "$@" 00:31:33.809 01:11:17 -- target/dif.sh@31 -- # create_subsystem 1 00:31:33.809 01:11:17 -- target/dif.sh@18 -- # local sub_id=1 00:31:33.809 01:11:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:33.809 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 bdev_null1 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:33.809 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:33.809 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.809 01:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.809 01:11:17 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 01:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.809 01:11:17 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:33.809 01:11:17 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:33.809 01:11:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.809 01:11:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:33.809 01:11:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.809 01:11:17 -- nvmf/common.sh@520 -- # config=() 00:31:33.809 01:11:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:33.809 01:11:17 -- target/dif.sh@82 -- # gen_fio_conf 00:31:33.809 01:11:17 -- nvmf/common.sh@520 -- # local subsystem config 00:31:33.809 01:11:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.809 01:11:17 -- target/dif.sh@54 -- # local file 00:31:33.809 01:11:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:33.809 01:11:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:33.809 01:11:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.809 01:11:17 -- target/dif.sh@56 -- # cat 00:31:33.809 01:11:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme$subsystem", 00:31:33.809 "trtype": "$TEST_TRANSPORT", 00:31:33.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "$NVMF_PORT", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.809 "hdgst": ${hdgst:-false}, 00:31:33.809 "ddgst": ${ddgst:-false} 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 } 00:31:33.809 EOF 00:31:33.809 )") 00:31:33.809 01:11:17 -- common/autotest_common.sh@1320 -- # shift 00:31:33.809 01:11:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:33.809 01:11:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.809 01:11:17 -- nvmf/common.sh@542 -- # cat 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:33.809 01:11:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:33.809 01:11:17 -- target/dif.sh@72 -- # (( file <= files )) 00:31:33.809 01:11:17 -- target/dif.sh@73 -- # cat 00:31:33.809 01:11:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:33.809 01:11:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": 
"Nvme$subsystem", 00:31:33.809 "trtype": "$TEST_TRANSPORT", 00:31:33.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "$NVMF_PORT", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.809 "hdgst": ${hdgst:-false}, 00:31:33.809 "ddgst": ${ddgst:-false} 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 } 00:31:33.809 EOF 00:31:33.809 )") 00:31:33.809 01:11:17 -- target/dif.sh@72 -- # (( file++ )) 00:31:33.809 01:11:17 -- nvmf/common.sh@542 -- # cat 00:31:33.809 01:11:17 -- target/dif.sh@72 -- # (( file <= files )) 00:31:33.809 01:11:17 -- nvmf/common.sh@544 -- # jq . 00:31:33.809 01:11:17 -- nvmf/common.sh@545 -- # IFS=, 00:31:33.809 01:11:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme0", 00:31:33.809 "trtype": "tcp", 00:31:33.809 "traddr": "10.0.0.2", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "4420", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:33.809 "hdgst": false, 00:31:33.809 "ddgst": false 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 },{ 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme1", 00:31:33.809 "trtype": "tcp", 00:31:33.809 "traddr": "10.0.0.2", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "4420", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.809 "hdgst": false, 00:31:33.809 "ddgst": false 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 }' 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:33.809 01:11:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:33.809 01:11:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:33.809 01:11:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:33.809 01:11:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:33.809 01:11:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:33.809 01:11:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.068 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:34.068 ... 00:31:34.068 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:34.068 ... 00:31:34.068 fio-3.35 00:31:34.068 Starting 4 threads 00:31:34.068 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.643 [2024-07-23 01:11:18.727108] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:34.643 [2024-07-23 01:11:18.727186] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:39.963 00:31:39.963 filename0: (groupid=0, jobs=1): err= 0: pid=3544965: Tue Jul 23 01:11:23 2024 00:31:39.963 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:31:39.963 slat (nsec): min=4867, max=60916, avg=12476.95, stdev=6439.17 00:31:39.963 clat (usec): min=855, max=7613, avg=4211.95, stdev=626.82 00:31:39.963 lat (usec): min=867, max=7625, avg=4224.43, stdev=626.76 00:31:39.963 clat percentiles (usec): 00:31:39.963 | 1.00th=[ 2900], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3818], 00:31:39.963 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:31:39.963 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5800], 00:31:39.963 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7242], 99.95th=[ 7570], 00:31:39.963 | 99.99th=[ 7635] 00:31:39.963 bw ( KiB/s): min=14288, max=16384, per=25.05%, avg=15053.89, stdev=649.65, samples=9 00:31:39.963 iops : min= 1786, max= 2048, avg=1881.67, stdev=81.26, samples=9 00:31:39.963 lat (usec) : 1000=0.03% 00:31:39.963 lat (msec) : 2=0.13%, 4=34.71%, 10=65.13% 00:31:39.963 cpu : usr=93.18%, sys=6.32%, ctx=8, majf=0, minf=46 00:31:39.963 IO depths : 1=0.1%, 2=3.8%, 4=67.9%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 issued rwts: total=9413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.963 filename0: (groupid=0, jobs=1): err= 0: pid=3544966: Tue Jul 23 01:11:23 2024 00:31:39.963 read: IOPS=1853, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:31:39.963 slat (usec): min=4, max=261, avg=14.00, stdev= 7.32 00:31:39.963 clat (usec): min=821, max=8027, avg=4273.79, stdev=630.78 00:31:39.963 lat (usec): min=840, max=8041, avg=4287.79, stdev=630.26 00:31:39.963 clat percentiles (usec): 00:31:39.963 | 1.00th=[ 3064], 5.00th=[ 3589], 10.00th=[ 3720], 20.00th=[ 3884], 00:31:39.963 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:31:39.963 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5800], 00:31:39.963 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 7177], 00:31:39.963 | 99.99th=[ 8029] 00:31:39.963 bw ( KiB/s): min=14240, max=15296, per=24.62%, avg=14792.89, stdev=314.67, samples=9 00:31:39.963 iops : min= 1780, max= 1912, avg=1849.11, stdev=39.33, samples=9 00:31:39.963 lat (usec) : 1000=0.02% 00:31:39.963 lat (msec) : 2=0.04%, 4=31.98%, 10=67.95% 00:31:39.963 cpu : usr=94.58%, sys=4.66%, ctx=51, majf=0, minf=46 00:31:39.963 IO depths : 1=0.1%, 2=5.4%, 4=66.0%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 issued rwts: total=9270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.963 filename1: (groupid=0, jobs=1): err= 0: pid=3544967: Tue Jul 23 01:11:23 2024 00:31:39.963 read: IOPS=1894, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5003msec) 00:31:39.963 slat (nsec): min=4877, max=59370, avg=12509.13, stdev=6520.49 00:31:39.963 clat (usec): min=1382, max=7368, avg=4183.34, stdev=668.45 00:31:39.963 lat (usec): min=1394, max=7380, avg=4195.84, stdev=668.22 
00:31:39.963 clat percentiles (usec): 00:31:39.963 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3752], 00:31:39.963 | 30.00th=[ 3851], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4178], 00:31:39.963 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 5211], 95.00th=[ 5800], 00:31:39.963 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 7242], 00:31:39.963 | 99.99th=[ 7373] 00:31:39.963 bw ( KiB/s): min=14464, max=16368, per=25.24%, avg=15168.00, stdev=604.46, samples=9 00:31:39.963 iops : min= 1808, max= 2046, avg=1896.00, stdev=75.56, samples=9 00:31:39.963 lat (msec) : 2=0.05%, 4=38.87%, 10=61.07% 00:31:39.963 cpu : usr=94.08%, sys=5.38%, ctx=18, majf=0, minf=56 00:31:39.963 IO depths : 1=0.1%, 2=3.2%, 4=69.7%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 issued rwts: total=9477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.963 filename1: (groupid=0, jobs=1): err= 0: pid=3544968: Tue Jul 23 01:11:23 2024 00:31:39.963 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5001msec) 00:31:39.963 slat (nsec): min=4965, max=62004, avg=12245.76, stdev=6503.95 00:31:39.963 clat (usec): min=1802, max=9414, avg=4210.51, stdev=707.12 00:31:39.963 lat (usec): min=1825, max=9429, avg=4222.75, stdev=706.30 00:31:39.963 clat percentiles (usec): 00:31:39.963 | 1.00th=[ 3097], 5.00th=[ 3359], 10.00th=[ 3589], 20.00th=[ 3752], 00:31:39.963 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4228], 00:31:39.963 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 5276], 95.00th=[ 5932], 00:31:39.963 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7832], 99.95th=[ 8094], 00:31:39.963 | 99.99th=[ 9372] 00:31:39.963 bw ( KiB/s): min=13995, max=16288, per=24.81%, avg=14907.89, stdev=698.14, samples=9 00:31:39.963 iops : min= 1749, max= 2036, avg=1863.44, stdev=87.33, samples=9 00:31:39.963 lat (msec) : 2=0.10%, 4=41.04%, 10=58.87% 00:31:39.963 cpu : usr=94.12%, sys=5.40%, ctx=8, majf=0, minf=47 00:31:39.963 IO depths : 1=0.2%, 2=3.1%, 4=69.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.963 issued rwts: total=9416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.963 00:31:39.963 Run status group 0 (all jobs): 00:31:39.963 READ: bw=58.7MiB/s (61.5MB/s), 14.5MiB/s-14.8MiB/s (15.2MB/s-15.5MB/s), io=294MiB (308MB), run=5001-5003msec 00:31:39.963 01:11:24 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:39.963 01:11:24 -- target/dif.sh@43 -- # local sub 00:31:39.963 01:11:24 -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.963 01:11:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:39.963 01:11:24 -- target/dif.sh@36 -- # local sub_id=0 00:31:39.963 01:11:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:39.963 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.963 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.963 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.963 01:11:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:39.963 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 
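
The four threads in the group above come from the shape of the generated job: one section per file (filename0 and filename1, one per target subsystem) with numjobs=2 doubling each, and the bs=8k,16k,128k triple sets separate read/write/trim block sizes, which is why fio reports bs=(R) 8192B, (W) 16.0KiB, (T) 128KiB. A minimal job-file sketch consistent with those headers, assuming gen_fio_conf lays the sections out this way and that the attached controllers expose bdevs named Nvme0n1 and Nvme1n1 (neither detail is spelled out in the trace):

cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
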
00:31:39.963 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.963 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.963 01:11:24 -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.963 01:11:24 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:39.963 01:11:24 -- target/dif.sh@36 -- # local sub_id=1 00:31:39.963 01:11:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.963 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.963 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.963 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.963 01:11:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:39.963 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.963 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.963 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.963 00:31:39.963 real 0m24.098s 00:31:39.963 user 4m28.666s 00:31:39.963 sys 0m7.853s 00:31:39.963 01:11:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.963 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.963 ************************************ 00:31:39.963 END TEST fio_dif_rand_params 00:31:39.963 ************************************ 00:31:40.221 01:11:24 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:40.222 01:11:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:40.222 01:11:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:40.222 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.222 ************************************ 00:31:40.222 START TEST fio_dif_digest 00:31:40.222 ************************************ 00:31:40.222 01:11:24 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:40.222 01:11:24 -- target/dif.sh@123 -- # local NULL_DIF 00:31:40.222 01:11:24 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:40.222 01:11:24 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:40.222 01:11:24 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:40.222 01:11:24 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:40.222 01:11:24 -- target/dif.sh@127 -- # numjobs=3 00:31:40.222 01:11:24 -- target/dif.sh@127 -- # iodepth=3 00:31:40.222 01:11:24 -- target/dif.sh@127 -- # runtime=10 00:31:40.222 01:11:24 -- target/dif.sh@128 -- # hdgst=true 00:31:40.222 01:11:24 -- target/dif.sh@128 -- # ddgst=true 00:31:40.222 01:11:24 -- target/dif.sh@130 -- # create_subsystems 0 00:31:40.222 01:11:24 -- target/dif.sh@28 -- # local sub 00:31:40.222 01:11:24 -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.222 01:11:24 -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.222 01:11:24 -- target/dif.sh@18 -- # local sub_id=0 00:31:40.222 01:11:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:40.222 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.222 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.222 bdev_null0 00:31:40.222 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.222 01:11:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.222 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.222 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.222 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.222 01:11:24 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.222 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.222 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.222 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.222 01:11:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.222 01:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.222 01:11:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.222 [2024-07-23 01:11:24.214207] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.222 01:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.222 01:11:24 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:40.222 01:11:24 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:40.222 01:11:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:40.222 01:11:24 -- nvmf/common.sh@520 -- # config=() 00:31:40.222 01:11:24 -- nvmf/common.sh@520 -- # local subsystem config 00:31:40.222 01:11:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:40.222 01:11:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:40.222 { 00:31:40.222 "params": { 00:31:40.222 "name": "Nvme$subsystem", 00:31:40.222 "trtype": "$TEST_TRANSPORT", 00:31:40.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.222 "adrfam": "ipv4", 00:31:40.222 "trsvcid": "$NVMF_PORT", 00:31:40.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.222 "hdgst": ${hdgst:-false}, 00:31:40.222 "ddgst": ${ddgst:-false} 00:31:40.222 }, 00:31:40.222 "method": "bdev_nvme_attach_controller" 00:31:40.222 } 00:31:40.222 EOF 00:31:40.222 )") 00:31:40.222 01:11:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.222 01:11:24 -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.222 01:11:24 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.222 01:11:24 -- target/dif.sh@54 -- # local file 00:31:40.222 01:11:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:40.222 01:11:24 -- target/dif.sh@56 -- # cat 00:31:40.222 01:11:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.222 01:11:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:40.222 01:11:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.222 01:11:24 -- common/autotest_common.sh@1320 -- # shift 00:31:40.222 01:11:24 -- nvmf/common.sh@542 -- # cat 00:31:40.222 01:11:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:40.222 01:11:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.222 01:11:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.222 01:11:24 -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:40.222 01:11:24 -- nvmf/common.sh@544 -- # jq . 
00:31:40.222 01:11:24 -- nvmf/common.sh@545 -- # IFS=, 00:31:40.222 01:11:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:40.222 "params": { 00:31:40.222 "name": "Nvme0", 00:31:40.222 "trtype": "tcp", 00:31:40.222 "traddr": "10.0.0.2", 00:31:40.222 "adrfam": "ipv4", 00:31:40.222 "trsvcid": "4420", 00:31:40.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.222 "hdgst": true, 00:31:40.222 "ddgst": true 00:31:40.222 }, 00:31:40.222 "method": "bdev_nvme_attach_controller" 00:31:40.222 }' 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:40.222 01:11:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:40.222 01:11:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:40.222 01:11:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:40.222 01:11:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:40.222 01:11:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.222 01:11:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.480 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:40.480 ... 00:31:40.480 fio-3.35 00:31:40.480 Starting 3 threads 00:31:40.480 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.739 [2024-07-23 01:11:24.844254] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
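
For the fio_dif_digest case, the target JSON printed just above differs from the earlier randread cases in exactly one respect: header and data digest are enabled on the attached controller, so every NVMe/TCP PDU carries CRC32C protection that the 128KiB random reads (three jobs at iodepth 3) then exercise. Reflowed for readability, the single descriptor the trace prints is:

  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": true,
      "ddgst": true
    },
    "method": "bdev_nvme_attach_controller"
  }
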
00:31:40.739 [2024-07-23 01:11:24.844350] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:52.934 00:31:52.934 filename0: (groupid=0, jobs=1): err= 0: pid=3545738: Tue Jul 23 01:11:35 2024 00:31:52.934 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(258MiB/10046msec) 00:31:52.934 slat (nsec): min=7075, max=58952, avg=13247.14, stdev=3225.21 00:31:52.934 clat (usec): min=8896, max=56348, avg=14549.43, stdev=3213.85 00:31:52.935 lat (usec): min=8908, max=56360, avg=14562.68, stdev=3213.86 00:31:52.935 clat percentiles (usec): 00:31:52.935 | 1.00th=[10028], 5.00th=[12125], 10.00th=[12780], 20.00th=[13304], 00:31:52.935 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:31:52.935 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16057], 95.00th=[16581], 00:31:52.935 | 99.00th=[17957], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:31:52.935 | 99.99th=[56361] 00:31:52.935 bw ( KiB/s): min=23296, max=28672, per=34.02%, avg=26419.20, stdev=1491.80, samples=20 00:31:52.935 iops : min= 182, max= 224, avg=206.40, stdev=11.65, samples=20 00:31:52.935 lat (msec) : 10=0.97%, 20=98.50%, 100=0.53% 00:31:52.935 cpu : usr=90.70%, sys=8.82%, ctx=16, majf=0, minf=153 00:31:52.935 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.935 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.935 filename0: (groupid=0, jobs=1): err= 0: pid=3545739: Tue Jul 23 01:11:35 2024 00:31:52.935 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10047msec) 00:31:52.935 slat (nsec): min=7579, max=35705, avg=13202.72, stdev=3297.96 00:31:52.935 clat (usec): min=8241, max=56983, avg=14752.99, stdev=2699.53 00:31:52.935 lat (usec): min=8253, max=56995, avg=14766.19, stdev=2699.65 00:31:52.935 clat percentiles (usec): 00:31:52.935 | 1.00th=[10028], 5.00th=[12125], 10.00th=[13042], 20.00th=[13566], 00:31:52.935 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:31:52.935 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:31:52.935 | 99.00th=[18220], 99.50th=[19006], 99.90th=[55313], 99.95th=[56361], 00:31:52.935 | 99.99th=[56886] 00:31:52.935 bw ( KiB/s): min=23040, max=28416, per=33.49%, avg=26012.20, stdev=1371.08, samples=20 00:31:52.935 iops : min= 180, max= 222, avg=203.20, stdev=10.71, samples=20 00:31:52.935 lat (msec) : 10=0.79%, 20=98.72%, 50=0.20%, 100=0.29% 00:31:52.935 cpu : usr=90.61%, sys=8.91%, ctx=23, majf=0, minf=138 00:31:52.935 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.935 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.935 filename0: (groupid=0, jobs=1): err= 0: pid=3545740: Tue Jul 23 01:11:35 2024 00:31:52.935 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10043msec) 00:31:52.935 slat (nsec): min=7516, max=42052, avg=13140.63, stdev=2984.64 00:31:52.935 clat (usec): min=6981, max=57540, avg=15064.62, stdev=3360.27 00:31:52.935 lat (usec): min=6993, max=57554, avg=15077.76, stdev=3360.43 00:31:52.935 clat percentiles (usec): 
00:31:52.935 | 1.00th=[ 9896], 5.00th=[12649], 10.00th=[13304], 20.00th=[13829], 00:31:52.935 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15270], 00:31:52.935 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909], 00:31:52.935 | 99.00th=[17957], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:31:52.935 | 99.99th=[57410] 00:31:52.935 bw ( KiB/s): min=22272, max=27136, per=32.85%, avg=25510.40, stdev=1276.69, samples=20 00:31:52.935 iops : min= 174, max= 212, avg=199.30, stdev= 9.97, samples=20 00:31:52.935 lat (msec) : 10=1.05%, 20=98.30%, 50=0.15%, 100=0.50% 00:31:52.935 cpu : usr=90.68%, sys=8.81%, ctx=22, majf=0, minf=154 00:31:52.935 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.935 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.935 00:31:52.935 Run status group 0 (all jobs): 00:31:52.935 READ: bw=75.8MiB/s (79.5MB/s), 24.8MiB/s-25.7MiB/s (26.0MB/s-27.0MB/s), io=762MiB (799MB), run=10043-10047msec 00:31:52.935 01:11:35 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:52.935 01:11:35 -- target/dif.sh@43 -- # local sub 00:31:52.935 01:11:35 -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.935 01:11:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:52.935 01:11:35 -- target/dif.sh@36 -- # local sub_id=0 00:31:52.935 01:11:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.935 01:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.935 01:11:35 -- common/autotest_common.sh@10 -- # set +x 00:31:52.935 01:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.935 01:11:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:52.935 01:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.935 01:11:35 -- common/autotest_common.sh@10 -- # set +x 00:31:52.935 01:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.935 00:31:52.935 real 0m11.127s 00:31:52.935 user 0m28.419s 00:31:52.935 sys 0m2.930s 00:31:52.935 01:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.935 01:11:35 -- common/autotest_common.sh@10 -- # set +x 00:31:52.935 ************************************ 00:31:52.935 END TEST fio_dif_digest 00:31:52.935 ************************************ 00:31:52.935 01:11:35 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:52.935 01:11:35 -- target/dif.sh@147 -- # nvmftestfini 00:31:52.935 01:11:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:52.935 01:11:35 -- nvmf/common.sh@116 -- # sync 00:31:52.935 01:11:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:52.935 01:11:35 -- nvmf/common.sh@119 -- # set +e 00:31:52.935 01:11:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:52.935 01:11:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:52.935 rmmod nvme_tcp 00:31:52.935 rmmod nvme_fabrics 00:31:52.935 rmmod nvme_keyring 00:31:52.935 01:11:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:52.935 01:11:35 -- nvmf/common.sh@123 -- # set -e 00:31:52.935 01:11:35 -- nvmf/common.sh@124 -- # return 0 00:31:52.935 01:11:35 -- nvmf/common.sh@477 -- # '[' -n 3539513 ']' 00:31:52.935 01:11:35 -- nvmf/common.sh@478 -- # killprocess 3539513 00:31:52.935 01:11:35 -- common/autotest_common.sh@926 -- # 
'[' -z 3539513 ']' 00:31:52.935 01:11:35 -- common/autotest_common.sh@930 -- # kill -0 3539513 00:31:52.935 01:11:35 -- common/autotest_common.sh@931 -- # uname 00:31:52.935 01:11:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:52.935 01:11:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3539513 00:31:52.935 01:11:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:52.935 01:11:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:52.935 01:11:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3539513' 00:31:52.935 killing process with pid 3539513 00:31:52.935 01:11:35 -- common/autotest_common.sh@945 -- # kill 3539513 00:31:52.935 01:11:35 -- common/autotest_common.sh@950 -- # wait 3539513 00:31:52.935 01:11:35 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:52.935 01:11:35 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:52.935 Waiting for block devices as requested 00:31:52.935 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:52.935 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.935 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.935 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.935 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.935 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:53.194 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:53.194 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.194 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.194 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:53.452 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:53.452 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:53.452 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:53.452 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:53.709 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:53.709 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.709 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.968 01:11:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:53.969 01:11:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:53.969 01:11:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.969 01:11:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:53.969 01:11:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.969 01:11:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.969 01:11:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.871 01:11:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:55.871 00:31:55.871 real 1m6.981s 00:31:55.871 user 6m24.597s 00:31:55.871 sys 0m20.269s 00:31:55.871 01:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.871 01:11:39 -- common/autotest_common.sh@10 -- # set +x 00:31:55.871 ************************************ 00:31:55.871 END TEST nvmf_dif 00:31:55.871 ************************************ 00:31:55.871 01:11:39 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.871 01:11:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:55.871 01:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:55.871 01:11:39 -- common/autotest_common.sh@10 -- # set +x 00:31:55.871 ************************************ 00:31:55.871 START TEST nvmf_abort_qd_sizes 00:31:55.871 
************************************ 00:31:55.871 01:11:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.871 * Looking for test storage... 00:31:55.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.871 01:11:40 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.871 01:11:40 -- nvmf/common.sh@7 -- # uname -s 00:31:55.871 01:11:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.871 01:11:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.871 01:11:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.871 01:11:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.871 01:11:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.871 01:11:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.871 01:11:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.871 01:11:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.871 01:11:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.871 01:11:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.871 01:11:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.871 01:11:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.871 01:11:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.871 01:11:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.871 01:11:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.871 01:11:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.871 01:11:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.871 01:11:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.871 01:11:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.871 01:11:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.871 01:11:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.871 01:11:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.871 01:11:40 -- paths/export.sh@5 -- # export PATH 00:31:55.871 01:11:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.871 01:11:40 -- nvmf/common.sh@46 -- # : 0 00:31:55.871 01:11:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:55.871 01:11:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:55.871 01:11:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:55.871 01:11:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.871 01:11:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.871 01:11:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:55.871 01:11:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:55.871 01:11:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:55.871 01:11:40 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:31:55.871 01:11:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:55.871 01:11:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.871 01:11:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:55.871 01:11:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:55.871 01:11:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:55.871 01:11:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.871 01:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.871 01:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.871 01:11:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:55.871 01:11:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:55.871 01:11:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:55.871 01:11:40 -- common/autotest_common.sh@10 -- # set +x 00:31:57.771 01:11:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:57.771 01:11:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:57.771 01:11:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:57.771 01:11:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:57.771 01:11:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:57.771 01:11:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:57.771 01:11:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:57.771 01:11:41 -- nvmf/common.sh@294 -- # net_devs=() 00:31:57.771 01:11:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:57.771 01:11:41 -- nvmf/common.sh@295 -- # e810=() 00:31:57.771 01:11:41 -- nvmf/common.sh@295 -- # local -ga e810 00:31:57.771 01:11:41 -- nvmf/common.sh@296 -- # x722=() 00:31:57.771 01:11:41 -- nvmf/common.sh@296 -- # local -ga x722 00:31:57.771 01:11:41 -- nvmf/common.sh@297 -- # mlx=() 00:31:57.771 01:11:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:57.771 01:11:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.771 01:11:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:57.771 01:11:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:57.771 01:11:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:57.771 01:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:57.771 01:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:57.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:57.771 01:11:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:57.771 01:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:57.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:57.771 01:11:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:57.771 01:11:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:57.771 01:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:57.771 01:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.771 01:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:57.771 01:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.771 01:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:57.772 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:57.772 01:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.772 01:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:57.772 01:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.772 01:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:57.772 01:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.772 01:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:57.772 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:57.772 01:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.772 01:11:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:57.772 01:11:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:57.772 01:11:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:57.772 01:11:41 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:57.772 01:11:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:57.772 01:11:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.772 01:11:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.772 01:11:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.772 01:11:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:57.772 01:11:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.772 01:11:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.772 01:11:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:57.772 01:11:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.772 01:11:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.772 01:11:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:57.772 01:11:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:57.772 01:11:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.772 01:11:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.030 01:11:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.030 01:11:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.030 01:11:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:58.030 01:11:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.030 01:11:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.030 01:11:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.030 01:11:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:58.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:31:58.030 00:31:58.030 --- 10.0.0.2 ping statistics --- 00:31:58.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.030 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:31:58.030 01:11:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:31:58.030 00:31:58.030 --- 10.0.0.1 ping statistics --- 00:31:58.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.030 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:58.030 01:11:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.030 01:11:42 -- nvmf/common.sh@410 -- # return 0 00:31:58.030 01:11:42 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:58.030 01:11:42 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:58.964 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:58.964 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:58.964 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:58.964 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:59.224 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.224 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:59.224 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.224 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.224 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:00.161 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:00.161 01:11:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.161 01:11:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:00.162 01:11:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:00.162 01:11:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.162 01:11:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:00.162 01:11:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:00.162 01:11:44 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:00.162 01:11:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:00.162 01:11:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:00.162 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:32:00.162 01:11:44 -- nvmf/common.sh@469 -- # nvmfpid=3550621 00:32:00.162 01:11:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:00.162 01:11:44 -- nvmf/common.sh@470 -- # waitforlisten 3550621 00:32:00.162 01:11:44 -- common/autotest_common.sh@819 -- # '[' -z 3550621 ']' 00:32:00.162 01:11:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.162 01:11:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:00.162 01:11:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.162 01:11:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:00.162 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:32:00.420 [2024-07-23 01:11:44.394681] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
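The nvmf_tcp_init block above builds the single-host test bed by moving one port of the dual-port E810 NIC into a private network namespace, so the target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) talk over real hardware. Condensed from the trace, and assuming the same interface names and addresses, the plumbing is roughly:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1             # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                                   # namespace that will host the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # sanity check, root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

The nvmf_tgt application is then launched inside that namespace (the NVMF_TARGET_NS_CMD prefix visible in the nvmfappstart line below), which is why the target application and the target-side pings in this log are wrapped in 'ip netns exec cvl_0_0_ns_spdk'.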
00:32:00.420 [2024-07-23 01:11:44.394769] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.420 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.420 [2024-07-23 01:11:44.463218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.420 [2024-07-23 01:11:44.554129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:00.420 [2024-07-23 01:11:44.554302] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.420 [2024-07-23 01:11:44.554323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.420 [2024-07-23 01:11:44.554339] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.420 [2024-07-23 01:11:44.554442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.420 [2024-07-23 01:11:44.554511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.420 [2024-07-23 01:11:44.554620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.420 [2024-07-23 01:11:44.554640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.352 01:11:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:01.352 01:11:45 -- common/autotest_common.sh@852 -- # return 0 00:32:01.352 01:11:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:01.352 01:11:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:01.352 01:11:45 -- common/autotest_common.sh@10 -- # set +x 00:32:01.352 01:11:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:01.352 01:11:45 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:01.352 01:11:45 -- scripts/common.sh@312 -- # local nvmes 00:32:01.352 01:11:45 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:01.352 01:11:45 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:01.352 01:11:45 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:01.352 01:11:45 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:01.352 01:11:45 -- scripts/common.sh@322 -- # uname -s 00:32:01.352 01:11:45 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:01.352 01:11:45 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:01.352 01:11:45 -- scripts/common.sh@327 -- # (( 1 )) 00:32:01.352 01:11:45 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:01.352 01:11:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:01.352 01:11:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.352 01:11:45 -- common/autotest_common.sh@10 -- # set +x 00:32:01.352 ************************************ 00:32:01.352 START TEST 
spdk_target_abort 00:32:01.352 ************************************ 00:32:01.352 01:11:45 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:01.352 01:11:45 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:01.352 01:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.352 01:11:45 -- common/autotest_common.sh@10 -- # set +x 00:32:04.668 spdk_targetn1 00:32:04.668 01:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:04.668 01:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.668 01:11:48 -- common/autotest_common.sh@10 -- # set +x 00:32:04.668 [2024-07-23 01:11:48.174695] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.668 01:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:04.668 01:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.668 01:11:48 -- common/autotest_common.sh@10 -- # set +x 00:32:04.668 01:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:04.668 01:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.668 01:11:48 -- common/autotest_common.sh@10 -- # set +x 00:32:04.668 01:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:04.668 01:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.668 01:11:48 -- common/autotest_common.sh@10 -- # set +x 00:32:04.668 [2024-07-23 01:11:48.206997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.668 01:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:04.668 01:11:48 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:04.668 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.195 Initializing NVMe Controllers 00:32:07.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:07.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:07.196 Initialization complete. Launching workers. 00:32:07.196 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10424, failed: 0 00:32:07.196 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1262, failed to submit 9162 00:32:07.196 success 727, unsuccess 535, failed 0 00:32:07.196 01:11:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:07.196 01:11:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:07.196 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.471 Initializing NVMe Controllers 00:32:10.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:10.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:10.471 Initialization complete. Launching workers. 00:32:10.471 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8635, failed: 0 00:32:10.471 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1210, failed to submit 7425 00:32:10.471 success 352, unsuccess 858, failed 0 00:32:10.471 01:11:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.471 01:11:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:10.471 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.746 Initializing NVMe Controllers 00:32:13.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:13.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:13.746 Initialization complete. Launching workers. 
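Stripped of the xtrace noise, the spdk_target_abort test above is a handful of RPCs followed by the abort example at three queue depths. rpc_cmd is the test suite's JSON-RPC helper; every call, flag and name below is copied from the trace, so this is a condensed restatement rather than an independent recipe:

    # expose the local NVMe drive at 0000:88:00.0 as an NVMe-oF/TCP subsystem
    rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # yields bdev spdk_targetn1
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

    # the rabort() loop from abort_qd_sizes.sh: run the abort example at queue depths 4, 24 and 64
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Roughly speaking, the 'success / unsuccess / failed' counters in each run come from the abort example itself: 'success' aborts caught their I/O in flight, 'unsuccess' aborts completed after the I/O had already finished, and 'failed' would indicate the abort command itself erroring out.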
00:32:13.746 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31624, failed: 0 00:32:13.746 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2686, failed to submit 28938 00:32:13.746 success 530, unsuccess 2156, failed 0 00:32:13.746 01:11:57 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:13.746 01:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.746 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:32:13.746 01:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.746 01:11:57 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:13.746 01:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.746 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:32:15.117 01:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.117 01:11:59 -- target/abort_qd_sizes.sh@62 -- # killprocess 3550621 00:32:15.117 01:11:59 -- common/autotest_common.sh@926 -- # '[' -z 3550621 ']' 00:32:15.117 01:11:59 -- common/autotest_common.sh@930 -- # kill -0 3550621 00:32:15.117 01:11:59 -- common/autotest_common.sh@931 -- # uname 00:32:15.117 01:11:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:15.117 01:11:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3550621 00:32:15.117 01:11:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:15.117 01:11:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:15.117 01:11:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3550621' 00:32:15.117 killing process with pid 3550621 00:32:15.117 01:11:59 -- common/autotest_common.sh@945 -- # kill 3550621 00:32:15.117 01:11:59 -- common/autotest_common.sh@950 -- # wait 3550621 00:32:15.383 00:32:15.383 real 0m14.027s 00:32:15.383 user 0m55.661s 00:32:15.383 sys 0m2.505s 00:32:15.383 01:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.383 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 ************************************ 00:32:15.383 END TEST spdk_target_abort 00:32:15.383 ************************************ 00:32:15.383 01:11:59 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:15.383 01:11:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:15.383 01:11:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:15.383 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 ************************************ 00:32:15.383 START TEST kernel_target_abort 00:32:15.383 ************************************ 00:32:15.383 01:11:59 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:15.383 01:11:59 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:15.383 01:11:59 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:15.383 01:11:59 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:15.383 01:11:59 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:15.383 01:11:59 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:15.383 01:11:59 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:15.383 01:11:59 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:15.383 01:11:59 -- nvmf/common.sh@627 -- # local block nvme 00:32:15.383 01:11:59 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:15.383 01:11:59 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:15.383 01:11:59 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:15.383 01:11:59 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:16.320 Waiting for block devices as requested 00:32:16.320 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:16.577 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.577 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.835 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.835 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.835 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.835 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.092 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.092 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.092 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:17.092 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:17.351 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:17.351 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:17.351 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:17.351 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.351 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.609 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.609 01:12:01 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:17.609 01:12:01 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:17.609 01:12:01 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:17.609 01:12:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:17.609 01:12:01 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:17.609 No valid GPT data, bailing 00:32:17.609 01:12:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:17.609 01:12:01 -- scripts/common.sh@393 -- # pt= 00:32:17.609 01:12:01 -- scripts/common.sh@394 -- # return 1 00:32:17.609 01:12:01 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:17.609 01:12:01 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:17.609 01:12:01 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:17.609 01:12:01 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:17.609 01:12:01 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:17.609 01:12:01 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:17.609 01:12:01 -- nvmf/common.sh@654 -- # echo 1 00:32:17.609 01:12:01 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:17.609 01:12:01 -- nvmf/common.sh@656 -- # echo 1 00:32:17.609 01:12:01 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:17.609 01:12:01 -- nvmf/common.sh@663 -- # echo tcp 00:32:17.609 01:12:01 -- nvmf/common.sh@664 -- # echo 4420 00:32:17.609 01:12:01 -- nvmf/common.sh@665 -- # echo ipv4 00:32:17.609 01:12:01 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:17.609 01:12:01 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:17.867 00:32:17.867 Discovery Log Number of Records 2, Generation counter 2 00:32:17.867 =====Discovery Log Entry 0====== 00:32:17.867 trtype: tcp 00:32:17.867 adrfam: ipv4 00:32:17.867 
subtype: current discovery subsystem 00:32:17.867 treq: not specified, sq flow control disable supported 00:32:17.867 portid: 1 00:32:17.867 trsvcid: 4420 00:32:17.867 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:17.867 traddr: 10.0.0.1 00:32:17.867 eflags: none 00:32:17.867 sectype: none 00:32:17.867 =====Discovery Log Entry 1====== 00:32:17.867 trtype: tcp 00:32:17.867 adrfam: ipv4 00:32:17.867 subtype: nvme subsystem 00:32:17.867 treq: not specified, sq flow control disable supported 00:32:17.867 portid: 1 00:32:17.867 trsvcid: 4420 00:32:17.867 subnqn: kernel_target 00:32:17.867 traddr: 10.0.0.1 00:32:17.867 eflags: none 00:32:17.867 sectype: none 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.867 01:12:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:17.867 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.148 Initializing NVMe Controllers 00:32:21.148 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:21.148 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:21.148 Initialization complete. Launching workers. 
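The kernel_target_abort runs above and below use the in-kernel nvmet stack instead of the SPDK target. The trace shows the mkdir/echo/ln -s sequence of configure_kernel_target, but xtrace does not print the redirect destinations, so the configfs attribute names in this sketch are taken from the standard Linux nvmet interface rather than from the log, and the serial-number echo ('echo SPDK-kernel_target') is left out because its destination is not visible; treat it as an approximation:

    modprobe nvmet nvmet_tcp             # nvmet_tcp assumed; the teardown later runs 'modprobe -r nvmet_tcp nvmet'
    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    echo 1            > "$sub/attr_allow_any_host"      # assumed target of the bare 'echo 1' in the trace
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                     # publish the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420             # should list kernel_target, as the discovery log above does

The abort loop that follows is the same rabort() helper as in the spdk_target_abort test, only pointed at traddr 10.0.0.1 and subnqn kernel_target.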
00:32:21.148 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30882, failed: 0 00:32:21.148 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30882, failed to submit 0 00:32:21.148 success 0, unsuccess 30882, failed 0 00:32:21.148 01:12:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:21.148 01:12:04 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:21.148 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.425 Initializing NVMe Controllers 00:32:24.425 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:24.425 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:24.425 Initialization complete. Launching workers. 00:32:24.425 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 60599, failed: 0 00:32:24.425 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15270, failed to submit 45329 00:32:24.425 success 0, unsuccess 15270, failed 0 00:32:24.425 01:12:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:24.425 01:12:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:24.425 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.006 Initializing NVMe Controllers 00:32:27.006 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:27.006 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:27.006 Initialization complete. Launching workers. 
00:32:27.006 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 59447, failed: 0 00:32:27.006 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14834, failed to submit 44613 00:32:27.006 success 0, unsuccess 14834, failed 0 00:32:27.006 01:12:11 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:27.006 01:12:11 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:27.006 01:12:11 -- nvmf/common.sh@677 -- # echo 0 00:32:27.265 01:12:11 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:27.265 01:12:11 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:27.265 01:12:11 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:27.265 01:12:11 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:27.265 01:12:11 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:27.265 01:12:11 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:27.265 00:32:27.265 real 0m11.860s 00:32:27.265 user 0m4.257s 00:32:27.265 sys 0m2.479s 00:32:27.265 01:12:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.265 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:32:27.265 ************************************ 00:32:27.265 END TEST kernel_target_abort 00:32:27.265 ************************************ 00:32:27.265 01:12:11 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:27.265 01:12:11 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:27.265 01:12:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:27.265 01:12:11 -- nvmf/common.sh@116 -- # sync 00:32:27.265 01:12:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:27.265 01:12:11 -- nvmf/common.sh@119 -- # set +e 00:32:27.265 01:12:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:27.265 01:12:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:27.265 rmmod nvme_tcp 00:32:27.265 rmmod nvme_fabrics 00:32:27.265 rmmod nvme_keyring 00:32:27.265 01:12:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:27.265 01:12:11 -- nvmf/common.sh@123 -- # set -e 00:32:27.265 01:12:11 -- nvmf/common.sh@124 -- # return 0 00:32:27.265 01:12:11 -- nvmf/common.sh@477 -- # '[' -n 3550621 ']' 00:32:27.265 01:12:11 -- nvmf/common.sh@478 -- # killprocess 3550621 00:32:27.265 01:12:11 -- common/autotest_common.sh@926 -- # '[' -z 3550621 ']' 00:32:27.265 01:12:11 -- common/autotest_common.sh@930 -- # kill -0 3550621 00:32:27.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3550621) - No such process 00:32:27.265 01:12:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3550621 is not found' 00:32:27.265 Process with pid 3550621 is not found 00:32:27.265 01:12:11 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:27.265 01:12:11 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:28.644 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:28.644 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:28.644 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:28.644 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:28.644 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:28.644 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:28.644 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:32:28.644 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:28.644 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:28.644 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:28.644 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:28.644 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:28.644 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:28.644 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:28.644 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:28.644 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:28.644 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:28.644 01:12:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:28.644 01:12:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:28.644 01:12:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:28.644 01:12:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:28.644 01:12:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.644 01:12:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:28.644 01:12:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.556 01:12:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:30.556 00:32:30.556 real 0m34.755s 00:32:30.556 user 1m2.168s 00:32:30.556 sys 0m8.183s 00:32:30.556 01:12:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:30.556 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:32:30.556 ************************************ 00:32:30.556 END TEST nvmf_abort_qd_sizes 00:32:30.556 ************************************ 00:32:30.815 01:12:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:30.815 01:12:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:30.815 01:12:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:30.815 01:12:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:30.815 01:12:14 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:30.815 01:12:14 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:30.815 01:12:14 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:30.815 01:12:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:30.815 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:32:30.815 01:12:14 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:30.815 01:12:14 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:30.815 01:12:14 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:30.815 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:32:32.716 INFO: APP EXITING 00:32:32.716 INFO: killing all VMs 00:32:32.716 INFO: killing vhost app 00:32:32.716 INFO: EXIT DONE 00:32:33.653 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:33.653 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:32:33.653 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:33.653 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:33.653 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:33.653 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:33.653 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:33.653 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:33.653 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:33.653 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:33.653 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:33.653 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:33.653 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:33.653 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:33.653 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:33.653 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:33.653 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:35.030 Cleaning 00:32:35.030 Removing: /var/run/dpdk/spdk0/config 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:35.030 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:35.030 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:35.030 Removing: /var/run/dpdk/spdk1/config 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:35.030 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:35.030 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:35.030 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:35.030 Removing: /var/run/dpdk/spdk2/config 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:35.030 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:35.030 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:35.030 Removing: /var/run/dpdk/spdk3/config 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:35.030 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:35.030 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:35.030 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:35.030 Removing: /var/run/dpdk/spdk4/config 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:35.030 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:35.030 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:35.030 Removing: /dev/shm/bdev_svc_trace.1 00:32:35.030 Removing: /dev/shm/nvmf_trace.0 00:32:35.030 Removing: /dev/shm/spdk_tgt_trace.pid3275988 00:32:35.030 Removing: /var/run/dpdk/spdk0 00:32:35.030 Removing: /var/run/dpdk/spdk1 00:32:35.030 Removing: /var/run/dpdk/spdk2 00:32:35.030 Removing: /var/run/dpdk/spdk3 00:32:35.030 Removing: /var/run/dpdk/spdk4 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3273771 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3274838 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3275988 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3276471 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3277692 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3278637 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3278820 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3279143 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3279383 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3279673 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3279832 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3279995 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3280174 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3280762 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3283177 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3283475 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3283672 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3283789 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3284224 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3284365 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3284676 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3284818 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3284982 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3285126 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3285296 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3285439 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3285805 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3285962 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3286160 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3286459 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3286480 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3286662 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3286806 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3286967 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3287105 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3287362 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3287533 00:32:35.030 
Removing: /var/run/dpdk/spdk_pid3287688 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3287832 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3288056 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3288253 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3288415 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3288555 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3288761 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3288979 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3289136 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3289282 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3289454 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3289701 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3289863 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3290003 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3290168 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3290421 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3290590 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3290730 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3290885 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3291150 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3291312 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3291459 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3291612 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3291869 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3292035 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3292179 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3292340 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3292600 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3292769 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3292915 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3293114 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3293338 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3293502 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3293647 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3293922 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3293991 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3294194 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3296387 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3352147 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3354688 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3361895 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3365748 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3368260 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3368671 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3372513 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3372556 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3373105 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3373802 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3374480 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3374885 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3374890 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3375036 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3375176 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3375179 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3375856 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3376466 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3377090 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3377497 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3377518 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3377773 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3378813 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3379677 00:32:35.030 Removing: /var/run/dpdk/spdk_pid3385163 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3385449 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3388122 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3391884 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3394609 00:32:35.031 
Removing: /var/run/dpdk/spdk_pid3401223 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3406622 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3407853 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3408543 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3419023 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3421290 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3424128 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3425318 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3426800 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3426958 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3427105 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3427378 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3427968 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3429969 00:32:35.031 Removing: /var/run/dpdk/spdk_pid3430866 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3431379 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3434925 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3438387 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3442081 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3466146 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3468976 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3472803 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3473791 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3474910 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3477613 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3480031 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3484399 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3484405 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3487330 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3487474 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3487615 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3487887 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3487912 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3489128 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3490345 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3491675 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3493398 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3494616 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3495832 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3499708 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3500173 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3501492 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3502250 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3506029 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3508074 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3511682 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3515307 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3518971 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3519388 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3519815 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3520231 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3520822 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3521490 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3522303 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3522971 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3525636 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3525783 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3529641 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3529823 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3531516 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3536628 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3536680 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3539695 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3541144 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3542576 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3543397 00:32:35.290 Removing: /var/run/dpdk/spdk_pid3544780 00:32:35.290 
Removing: /var/run/dpdk/spdk_pid3545682
00:32:35.290 Removing: /var/run/dpdk/spdk_pid3551060
00:32:35.290 Removing: /var/run/dpdk/spdk_pid3551465
00:32:35.290 Removing: /var/run/dpdk/spdk_pid3551871
00:32:35.290 Removing: /var/run/dpdk/spdk_pid3553448
00:32:35.290 Removing: /var/run/dpdk/spdk_pid3553869
00:32:35.290 Removing: /var/run/dpdk/spdk_pid3554592
00:32:35.290 Clean
00:32:35.290 killing process with pid 3245957
00:32:43.408 killing process with pid 3245954
00:32:43.408 killing process with pid 3245956
00:32:43.408 killing process with pid 3245955
00:32:43.408 01:12:27 -- common/autotest_common.sh@1436 -- # return 0
00:32:43.408 01:12:27 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:32:43.408 01:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:43.408 01:12:27 -- common/autotest_common.sh@10 -- # set +x
00:32:43.408 01:12:27 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:32:43.408 01:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:43.408 01:12:27 -- common/autotest_common.sh@10 -- # set +x
00:32:43.669 01:12:27 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:43.669 01:12:27 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:43.669 01:12:27 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:43.669 01:12:27 -- spdk/autotest.sh@394 -- # hash lcov
00:32:43.669 01:12:27 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:43.669 01:12:27 -- spdk/autotest.sh@396 -- # hostname
00:32:43.669 01:12:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:43.669 geninfo: WARNING: invalid characters removed from testname!
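The lcov entries above and immediately below implement a capture -> merge -> filter flow for the coverage data. The stand-alone sketch that follows restates that flow for readability; the OUT, SPDK and LCOV_OPTS shorthands and the for-loop are illustrative conveniences (the job actually runs one lcov -r per pattern with a longer --rc option set), while the paths, tracefile names and filter patterns are the ones recorded in the log.

# Sketch of the coverage post-processing shown in this log (not the autotest script itself).
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1) capture the coverage counters produced by the test run, tagged with the hostname
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2) merge the pre-test baseline with the test capture
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3) strip code that should not count toward SPDK coverage
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# 4) drop the intermediate tracefiles
rm -f cov_base.info cov_test.info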
00:33:10.269 01:12:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:13.565 01:12:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:16.865 01:13:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:19.406 01:13:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:21.947 01:13:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.242 01:13:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.787 01:13:11 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:27.787 01:13:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.787 01:13:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:27.787 01:13:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.787 01:13:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.787 01:13:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.787 01:13:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.787 01:13:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.787 01:13:11 -- paths/export.sh@5 -- $ export PATH 00:33:27.787 01:13:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.787 01:13:11 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:27.787 01:13:11 -- common/autobuild_common.sh@438 -- $ date +%s 00:33:27.787 01:13:11 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721689991.XXXXXX 00:33:27.787 01:13:11 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721689991.OoeHwE 00:33:27.787 01:13:11 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:33:27.787 01:13:11 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:33:27.787 01:13:11 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:33:27.787 01:13:11 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:33:27.787 01:13:11 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:27.787 01:13:11 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:27.787 01:13:11 -- common/autobuild_common.sh@454 -- $ get_config_params 00:33:27.787 01:13:11 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:27.787 01:13:11 -- common/autotest_common.sh@10 -- $ set +x 00:33:27.787 01:13:11 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:33:27.787 01:13:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:33:27.787 01:13:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:27.787 01:13:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:27.787 01:13:11 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:27.787 01:13:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:27.787 01:13:11 -- 
spdk/autopackage.sh@19 -- $ timing_finish
00:33:27.787 01:13:11 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:27.787 01:13:11 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:27.787 01:13:11 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:27.787 01:13:11 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:27.787 + [[ -n 3191183 ]]
00:33:27.787 + sudo kill 3191183
00:33:27.797 [Pipeline] }
00:33:27.815 [Pipeline] // stage
00:33:27.820 [Pipeline] }
00:33:27.837 [Pipeline] // timeout
00:33:27.843 [Pipeline] }
00:33:27.860 [Pipeline] // catchError
00:33:27.865 [Pipeline] }
00:33:27.882 [Pipeline] // wrap
00:33:27.888 [Pipeline] }
00:33:27.903 [Pipeline] // catchError
00:33:27.912 [Pipeline] stage
00:33:27.915 [Pipeline] { (Epilogue)
00:33:27.929 [Pipeline] catchError
00:33:27.931 [Pipeline] {
00:33:27.944 [Pipeline] echo
00:33:27.946 Cleanup processes
00:33:27.952 [Pipeline] sh
00:33:28.238 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:28.238 3566786 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:28.252 [Pipeline] sh
00:33:28.555 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:28.555 ++ grep -v 'sudo pgrep'
00:33:28.555 ++ awk '{print $1}'
00:33:28.555 + sudo kill -9
00:33:28.555 + true
00:33:28.572 [Pipeline] sh
00:33:28.854 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:38.837 [Pipeline] sh
00:33:39.122 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:39.122 Artifacts sizes are good
00:33:39.137 [Pipeline] archiveArtifacts
00:33:39.144 Archiving artifacts
00:33:39.361 [Pipeline] sh
00:33:39.658 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:39.674 [Pipeline] cleanWs
00:33:39.684 [WS-CLEANUP] Deleting project workspace...
00:33:39.684 [WS-CLEANUP] Deferred wipeout is used...
00:33:39.691 [WS-CLEANUP] done
00:33:39.693 [Pipeline] }
00:33:39.713 [Pipeline] // catchError
00:33:39.725 [Pipeline] sh
00:33:40.006 + logger -p user.info -t JENKINS-CI
00:33:40.014 [Pipeline] }
00:33:40.030 [Pipeline] // stage
00:33:40.036 [Pipeline] }
00:33:40.052 [Pipeline] // node
00:33:40.058 [Pipeline] End of Pipeline
00:33:40.097 Finished: SUCCESS
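For reference, the timing_finish entries near the end of the log feed timing.txt (folded "Step;substep seconds" lines produced by the timing_enter/timing_exit calls) to FlameGraph to render the "Build Timing" graph. A minimal sketch of the same invocation outside the CI job follows; the output redirection and the timing.svg filename are assumptions, since the log does not show where the SVG is written.

# Sketch: rendering the build-timing flame graph from the archived timing.txt.
# The --title/--nametype/--countname options are taken from the log; timing.svg is assumed.
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
if [ -x /usr/local/FlameGraph/flamegraph.pl ]; then
    # flamegraph.pl reads folded stacks from the file argument and writes SVG to stdout
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        "$OUT/timing.txt" > "$OUT/timing.svg"
fi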